I have small cubes that make up a grid forming a larger 3D cube. On each small cube I use a bitmap to texture the surface, but I want to use more than one picture. I can build more textures within loadTexture by adding them to `final int[] textureHandle = new int[1];` and returning them. How do I assign them to each small cube I'm drawing, though?
@Override
public void onSurfaceCreated(GL10 glUnused, EGLConfig config)
{
mLastRequestedCubeFactor = mActualCubeFactor = 3;
generateCubes(mActualCubeFactor, false, false);
// Set the background clear color to black.
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
// Use culling to remove back faces.
GLES20.glEnable(GLES20.GL_CULL_FACE);
// Enable depth testing
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
// Position the eye in front of the origin.
final float eyeX = 0.0f;
final float eyeY = 0.0f;
final float eyeZ = -0.5f;
// We are looking toward the distance
final float lookX = 0.0f;
final float lookY = 0.0f;
final float lookZ = -5.0f;
// Set our up vector. This is where our head would be pointing were we holding the camera.
final float upX = 0.0f;
final float upY = 1.0f;
final float upZ = 0.0f;
// Set the view matrix. This matrix can be said to represent the camera position.
// NOTE: In OpenGL 1, a ModelView matrix is used, which is a combination of a model and
// view matrix. In OpenGL 2, we can keep track of these matrices separately if we choose.
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
final String vertexShader = RawResourceReader.readTextFileFromRawResource(mLessonSevenActivity, R.raw.lesson_seven_vertex_shader);
final String fragmentShader = RawResourceReader.readTextFileFromRawResource(mLessonSevenActivity, R.raw.lesson_seven_fragment_shader);
final int vertexShaderHandle = ShaderHelper.compileShader(GLES20.GL_VERTEX_SHADER, vertexShader);
final int fragmentShaderHandle = ShaderHelper.compileShader(GLES20.GL_FRAGMENT_SHADER, fragmentShader);
mProgramHandle = ShaderHelper.createAndLinkProgram(vertexShaderHandle, fragmentShaderHandle,
new String[] {"a_Position", "a_Normal", "a_TexCoordinate"});
// Load the texture
mAndroidDataHandle = TextureHelper.loadTexture(mLessonSevenActivity, R.drawable.usb_android);
// Bind first so the following calls affect this texture.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mAndroidDataHandle);
GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR_MIPMAP_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
// Initialize the accumulated rotation matrix
Matrix.setIdentityM(mAccumulatedRotation, 0);
}
public class TextureHelper
{
public static int loadTexture(final Context context, final int resourceId)
{
final int[] textureHandle = new int[1];
GLES20.glGenTextures(1, textureHandle, 0);
if (textureHandle[0] != 0)
{
final BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false; // No pre-scaling
// Read in the resource
final Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options);
// Bind to the texture in OpenGL
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
// Set filtering
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
// Load the bitmap into the bound texture.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
// Recycle the bitmap, since its data has been loaded into OpenGL.
bitmap.recycle();
}
if (textureHandle[0] == 0)
{
throw new RuntimeException("Error loading texture.");
}
return textureHandle[0];
}
}
How do I instantiate them to each small cube I'm drawing though?
In short, you don't.
Even in desktop GL, where vertex instancing is a core feature, there is no way to change texture bindings without splitting the draw into multiple draw calls.
You could use a texture atlas, array texture or geometry shader to sample from a different (already bound) texture or a different part of a single texture. Alternatively, you could use bindless textures. Each one of those things I mentioned requires a newer version of GL than the last.
The only way to do this in ES is going to be either multiple draw calls, or a texture atlas/binding textures to multiple texture units. But since instancing is not a core feature, computing the texture coordinate / unit dynamically is a tremendous pain and will involve duplicating vertex data.
The bottom line is, what do you really mean by instantiated cube? Are you trying to draw 500 cubes in a single operation, or are you drawing them separately by calling some method in your cube class? Instancing has different meanings depending on the context.
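If you go the atlas route, the per-cube texture coordinates can be remapped from a tile index to a sub-rectangle of the atlas, either on the CPU or in the vertex shader. A minimal plain-Java sketch of the math, assuming a square atlas of equally sized tiles (the class and method names are illustrative, not from the code above):

```java
public final class AtlasUV {
    /**
     * Remaps a base texture coordinate in [0,1] to the sub-rectangle
     * of the atlas occupied by the given tile.
     */
    public static float[] remap(float u, float v, int tileIndex, int tilesPerRow) {
        float tileSize = 1.0f / tilesPerRow;                  // width/height of one tile in UV space
        float offsetU = (tileIndex % tilesPerRow) * tileSize; // column offset
        float offsetV = (tileIndex / tilesPerRow) * tileSize; // row offset
        return new float[] { offsetU + u * tileSize, offsetV + v * tileSize };
    }
}
```

For example, with a 2×2 atlas, tile 0's coordinates span (0,0) to (0.5,0.5) and tile 3's span (0.5,0.5) to (1,1). Per cube, you would bake the remapped coordinates into the vertex data or pass the offset as a uniform per draw call.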
Using Android Studio, my code renders an array of floats as a texture passed to GLSL, with one float per texel in the range 0 to 1, like a grayscale texture. For that I use GL_LUMINANCE as the internalFormat and format for glTexImage2D, and GL_FLOAT for type. Running the app on an Android emulator works fine (it uses my PC's GPU), but on a real device (Samsung Galaxy S7) calling glTexImage2D gives error 1282, GL_INVALID_OPERATION. I thought it might be a problem with non-power-of-two textures, but the width and height are certainly powers of two.
The code uses Jos Stam's fluid simulation C code (compiled with the NDK, not ported), which outputs density values for a grid.
mSizeN is the width (same as the height) of the fluid simulation grid, although the fluid sim adds 2 to it for boundary conditions, so the width of the array returned is mSizeN + 2; 128 in this case.
The coordinate system is set up as an orthographic projection with 0.0,0.0 the top left of the screen, 1.0,1.0 is the bottom right. I just draw a full screen quad and use the interpolated position across the quad in GLSL as texture coordinates to the array containing density values. Nice easy way to render it.
This is the renderer class.
public class GLFluidsimRenderer implements GLWallpaperService.Renderer {
private final String TAG = "GLFluidsimRenderer";
private FluidSolver mFluidSolver = new FluidSolver();
private float[] mProjectionMatrix = new float[16];
private final FloatBuffer mFullScreenQuadVertices;
private Context mActivityContext;
private int mProgramHandle;
private int mProjectionMatrixHandle;
private int mDensityArrayHandle;
private int mPositionHandle;
private int mGridSizeHandle;
private final int mBytesPerFloat = 4;
private final int mStrideBytes = 3 * mBytesPerFloat;
private final int mPositionOffset = 0;
private final int mPositionDataSize = 3;
private int mDensityTexId;
public static int mSizeN = 126;
public GLFluidsimRenderer(final Context activityContext) {
mActivityContext = activityContext;
final float[] fullScreenQuadVerticesData = {
0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
1.0f, 0.0f, 0.0f,
1.0f, 1.0f, 0.0f,
};
mFullScreenQuadVertices = ByteBuffer.allocateDirect(fullScreenQuadVerticesData.length * mBytesPerFloat)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
mFullScreenQuadVertices.put(fullScreenQuadVerticesData).position(0);
}
public void onTouchEvent(MotionEvent event) {
}
@Override
public void onSurfaceCreated(GL10 glUnused, EGLConfig config) {
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
GLES20.glClearColor(0.5f, 0.5f, 0.5f, 0.5f);
String vertShader = AssetReader.getStringAsset(mActivityContext, "fluidVertShader");
String fragShader = AssetReader.getStringAsset(mActivityContext, "fluidFragDensityShader");
final int vertexShaderHandle = ShaderHelper.compileShader(GLES20.GL_VERTEX_SHADER, vertShader);
final int fragmentShaderHandle = ShaderHelper.compileShader(GLES20.GL_FRAGMENT_SHADER, fragShader);
mProgramHandle = ShaderHelper.createAndLinkProgram(vertexShaderHandle, fragmentShaderHandle,
new String[] {"a_Position"});
mDensityTexId = TextureHelper.loadTextureLumF(mActivityContext, null, mSizeN + 2, mSizeN + 2);
}
@Override
public void onSurfaceChanged(GL10 glUnused, int width, int height) {
mFluidSolver.init(width, height, mSizeN);
GLES20.glViewport(0, 0, width, height);
Matrix.setIdentityM(mProjectionMatrix, 0);
Matrix.orthoM(mProjectionMatrix, 0, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f);
}
@Override
public void onDrawFrame(GL10 glUnused) {
GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glUseProgram(mProgramHandle);
mFluidSolver.step();
TextureHelper.updateTextureLumF(mFluidSolver.get_density(), mDensityTexId, mSizeN + 2, mSizeN + 2);
mProjectionMatrixHandle = GLES20.glGetUniformLocation(mProgramHandle, "u_ProjectionMatrix");
mDensityArrayHandle = GLES20.glGetUniformLocation(mProgramHandle, "u_aDensity");
mGridSizeHandle = GLES20.glGetUniformLocation(mProgramHandle, "u_GridSize");
mPositionHandle = GLES20.glGetAttribLocation(mProgramHandle, "a_Position");
double start = System.nanoTime();
drawQuad(mFullScreenQuadVertices);
double end = System.nanoTime();
}
private void drawQuad(final FloatBuffer aQuadBuffer) {
// Pass in the position information
aQuadBuffer.position(mPositionOffset);
GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false,
mStrideBytes, aQuadBuffer);
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Attach density array to texture unit 0
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mDensityTexId);
GLES20.glUniform1i(mDensityArrayHandle, 0);
// Pass in the actual size of the grid.
GLES20.glUniform1i(mGridSizeHandle, mSizeN + 2);
GLES20.glUniformMatrix4fv(mProjectionMatrixHandle, 1, false, mProjectionMatrix, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
}
}
Here's the texture helper functions.
public static int loadTextureLumF(final Context context, final float[] data, final int width, final int height) {
final int[] textureHandle = new int[1];
GLES20.glGenTextures(1, textureHandle, 0);
if (textureHandle[0] != 0) {
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1);
GLES20.glPixelStorei(GLES20.GL_PACK_ALIGNMENT, 1);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
width, height, 0, GLES20.GL_LUMINANCE, GLES20.GL_FLOAT,
(data != null ? FloatBuffer.wrap(data) : null));
}
if (textureHandle[0] == 0)
throw new RuntimeException("Error loading texture.");
return textureHandle[0];
}
public static void updateTextureLumF(final float[] data, final int texId, final int w, final int h) {
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texId);
GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, w, h, GLES20.GL_LUMINANCE, GLES20.GL_FLOAT, (data != null ? FloatBuffer.wrap(data) : null));
}
Fragment shader.
precision mediump float;
uniform sampler2D u_aDensity;
uniform int u_GridSize;
varying vec4 v_Color;
varying vec4 v_Position;
void main()
{
gl_FragColor = texture2D(u_aDensity, vec2(v_Position.x, v_Position.y));
}
Is the combination of GL_FLOAT and GL_LUMINANCE unsupported in OpenGL ES 2?
(Screenshot from the Android emulator.)
Edit:
To add, am I right in saying that each floating-point value will be reduced to an 8-bit integer component when transferred with glTexImage2D (etc.), so most of the floating-point precision will be lost? In that case it might be best to rethink the simulator's implementation to output fixed point. That can be done easily; Stam even describes it in his paper.
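If the simulator stays in floats, one option is to quantize the grid on the CPU before each upload and use the universally supported GL_UNSIGNED_BYTE path. An illustrative sketch, not from the code above (the class name is hypothetical):

```java
public final class Quantize {
    /** Converts densities in [0,1] to 8-bit values for a GL_UNSIGNED_BYTE luminance upload. */
    public static byte[] toUnsignedBytes(float[] density) {
        byte[] out = new byte[density.length];
        for (int i = 0; i < density.length; i++) {
            // Clamp to [0,1], then scale to [0,255].
            float d = Math.max(0f, Math.min(1f, density[i]));
            out[i] = (byte) Math.round(d * 255f);
        }
        return out;
    }
}
```

The resulting array would then be wrapped in a ByteBuffer and passed to glTexSubImage2D with type GL_UNSIGNED_BYTE instead of GL_FLOAT.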
Table 3.4 of the spec shows the "Valid pixel format and type combinations" for use with glTexImage2D. For GL_LUMINANCE, the only option is GL_UNSIGNED_BYTE.
OES_texture_float is the relevant extension you'd need to check for.
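The extension list comes back from glGetString(GL_EXTENSIONS) as one space-separated string, and a plain substring search can give false positives (e.g. matching `GL_OES_texture_float` inside `GL_OES_texture_float_linear`), so exact token matching is safer. A small sketch of the check (the class name is illustrative):

```java
public final class GlExtensions {
    /** Returns true if the space-separated extension string contains the exact token. */
    public static boolean has(String extensions, String name) {
        if (extensions == null) return false;
        for (String token : extensions.split(" ")) {
            if (token.equals(name)) return true;
        }
        return false;
    }
}
```

On Android you would pass `GLES20.glGetString(GLES20.GL_EXTENSIONS)`, called on the GL thread with a current context.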
An alternative approach, which would work on more devices, is to pack your data into the multiple channels of an RGBA texture. Here is some discussion about packing a float value into an 8888. Note, however, that not all OpenGL ES 2 devices even support 8888 render targets; you might have to pack into a 4444.
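One common packing scheme stores successive base-256 "digits" of the fraction in the four channels; the shader-side unpack is the mirror image (a dot product with 1, 1/256, 1/65536, 1/16777216). A plain-Java sketch of the idea, with illustrative names (note a float's 24-bit mantissa means the lowest channel carries little information):

```java
public final class FloatPack {
    /** Packs a value in [0,1) into four 8-bit channels (RGBA), highest digit first. */
    public static int[] pack(float v) {
        long bits = (long) (v * 4294967296.0) & 0xFFFFFFFFL; // 32-bit fixed point (2^32)
        return new int[] {
            (int) (bits >>> 24) & 0xFF, (int) (bits >>> 16) & 0xFF,
            (int) (bits >>> 8) & 0xFF,  (int) bits & 0xFF
        };
    }

    /** Reverses pack(). */
    public static float unpack(int[] rgba) {
        long bits = ((long) rgba[0] << 24) | ((long) rgba[1] << 16)
                  | (rgba[2] << 8) | rgba[3];
        return (float) (bits / 4294967296.0);
    }
}
```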
Or you could use OpenGL ES 3. Android device support for OpenGL ES 3 is up to 61.3% according to this.
EDIT: On re-reading more carefully, there probably isn't any benefit in using any higher than an 8-bit texture, because when you write the texture to gl_FragColor in your fragment shader you are copying into a 565 or 8888 framebuffer, so any extra precision is lost anyway at that point.
I'm trying to change the texture image displayed on an object in my OpenGL view in response to a button click. Currently, when the user presses a button, I set a flag to true in my OpenGL Renderer thread. My cross-thread communication seems fine: I am able to successfully toggle flag variables in the Renderer thread at will. My problem seems to be with the OpenGL onDraw() workflow: I can't figure out how to systematically change a texture on an object after a texture has already been set during the Renderer's initial onSurfaceCreated() execution.
It seems that fretBoardTexture = new OpenGL_TextureData(mActivityContext, R.drawable.dark_wood); never works outside of onSurfaceCreated(). Why?
Below is the code I've tried. Am I missing a sequence of GLES20 method invocations somewhere?
OpenGL Renderer Class
private int chosenWoodColor = 1;
public void onSurfaceCreated(GL10 glUnused, EGLConfig config) {
...
loadFretBoardTexture(); //this call works: it succeeds in changing texture
}
public void onDrawFrame(GL10 glUnused) {
if (flag){
loadFretBoardTexture(); //this call fails: it never changes the texture
}
...
//***Fretboard OpenGL_TextureData Binding***
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
// Bind the texture to this unit.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, fretBoard._textureID);
//Draw Background
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, 0.0f, 0.0f, -1.0f);
drawFretBoard(); //draw the fretboard
...
}
public void loadFretBoardTexture(){
switch (chosenWoodColor){
case 1:
fretBoardTexture = new OpenGL_TextureData(mActivityContext, R.drawable.dark_wood);
break;
case 2:
fretBoardTexture = new OpenGL_TextureData(mActivityContext, R.drawable.medium_wood);
break;
case 3:
fretBoardTexture = new OpenGL_TextureData(mActivityContext, R.drawable.light_wood);
break;
}
setFretboardTextureRefresh(false);
return;
}
Texture Data Helper Class
public class OpenGL_TextureData {
public int textureID;
public float imageWidth, imageHeight;
public OpenGL_TextureData(Context context, int resourceId) { //constructor
final int[] textureHandle = new int[1];
GLES20.glGenTextures(1, textureHandle, 0);
if (textureHandle[0] != 0) {
final BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false; // No pre-scaling
// Read in the resource
final Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options);
//fetch texture image width & height
imageWidth = options.outWidth;
imageHeight = options.outHeight;
// Bind to the texture in OpenGL
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
// Set filtering
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
// Load the bitmap into the bound texture.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
// Recycle the bitmap, since its data has been loaded into OpenGL.
bitmap.recycle();
}
if (textureHandle[0] == 0) {
throw new RuntimeException("Error loading texture.");
}
textureID = textureHandle[0];
}
}
Solved.
The problem was that I was creating a new texture object and loading a bitmap to it, but since the texture object was already created in my onSurfaceCreated() method this was undesired: I really just needed to load a bitmap onto the already existing texture object.
I added the following method to my Texture data helper class:
public void updateTexture(Context context, int resourceId) {
final BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false; // No pre-scaling
final Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options); //Read in the resource
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureID); // bind the existing texture object rather than a hard-coded handle
// Load the bitmap into the bound texture.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
}
Inspiration drawn from here
I am new to Android OpenGL ES 2.0, and I have run into an issue while creating a 3D model from a .obj file.
While rendering the 3D model, I get a blank screen.
Sharing the code below:
@Override
public void onSurfaceCreated(GL10 glUnused, EGLConfig config)
{
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
// Enable texture mapping
GLES20.glEnable(GLES20.GL_TEXTURE_2D);
// Position the eye in front of the origin.
final float eyeX = 0.0f;
final float eyeY = 0.0f;
final float eyeZ = -0.5f;
// We are looking toward the distance
final float lookX = 0.0f;
final float lookY = 0.0f;
final float lookZ = -8.0f;
// Set our up vector. This is where our head would be pointing were we holding the camera.
final float upX = 0.0f;
final float upY = 1.0f;
final float upZ = 0.0f;
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
final String vertexShader = RawResourceReader.readTextFileFromRawResource(mActivityContext, R.raw.per_pixel_vertex_shader_tex_and_light);
final String fragmentShader = RawResourceReader.readTextFileFromRawResource(mActivityContext, R.raw.per_pixel_fragment_shader_tex_and_light);
final int vertexShaderHandle = ShaderHelper.compileShader(GLES20.GL_VERTEX_SHADER, vertexShader);
final int fragmentShaderHandle = ShaderHelper.compileShader(GLES20.GL_FRAGMENT_SHADER, fragmentShader);
mProgramHandle = ShaderHelper.createAndLinkProgram(vertexShaderHandle, fragmentShaderHandle,
new String[] {"a_Position", "a_Normal", "a_TexCoordinate"});
// Load the texture
mTextureDataHandle = TextureHelper.loadTexture(mActivityContext, R.drawable.bumpy_bricks_public_domain);
// Initialize the accumulated rotation matrix
Matrix.setIdentityM(mAccumulatedRotation, 0);
}
Load texture function:
// Read in the resource
final Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options);
// Bind to the texture in OpenGL
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
// Set filtering
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
// Load the bitmap into the bound texture.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
// Recycle the bitmap, since its data has been loaded into OpenGL.
bitmap.recycle();
onDrawFrame() of the Renderer:
// Set the active texture unit to texture unit 0.
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
// Bind the texture to this unit.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureDataHandle);
....
GLES20.glUniform1i(mTextureUniformHandle, 0);
// Pass in the position information
mCubePositions.position(0);
GLES20.glVertexAttribPointer(mPositionHandle, POSITION_DATA_SIZE_IN_ELEMENTS, GLES20.GL_FLOAT, false, 0, mCubePositions);
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Pass in the normal information
mCubeNormals.position(0);
GLES20.glVertexAttribPointer(mNormalHandle, NORMAL_DATA_SIZE_IN_ELEMENTS, GLES20.GL_FLOAT, false, 0, mCubeNormals);
GLES20.glEnableVertexAttribArray(mNormalHandle);
// Pass in the texture information
mCubeTextureCoordinates.position(0);
GLES20.glVertexAttribPointer(mTextureCoordinateHandle, TEXTURE_DATA_SIZE_IN_ELEMENTS, GLES20.GL_FLOAT, false,
0, mCubeTextureCoordinates);
GLES20.glEnableVertexAttribArray(mTextureCoordinateHandle);
GLES20.glDrawElements(GLES20.GL_TRIANGLE_STRIP ,m3DModel.getIndicesLength(), GLES20.GL_UNSIGNED_SHORT, m3DModel.getIndices());
If anyone has an idea about this issue, please reply.
I can't figure out where I made the mistake in the code.
I am trying to display a single texture on a quad.
I had a working VertexObject, which drew a square (or any geometric object) fine. Now I tried expanding it to handle textures too, but the textures don't work: I only see the quad in one solid color.
The coordinate data is in an arrayList:
/*the vertices' coordinates*/
public int coordCount = 0;
/*float array of 3(x,y,z)*/
public ArrayList<Float> coordList = new ArrayList<Float>(coordCount);
/*the coordinates' indexes(if used)*/
/*maximum limit:32767*/
private int orderCount = 0;
private ArrayList<Short> orderList = new ArrayList<Short>(orderCount);
/*textures*/
public boolean textured;
private boolean textureIsReady;
private ArrayList<Float> textureList = new ArrayList<Float>(coordCount);
private Bitmap bitmap; //the image to be displayed
private int textures[]; //the textures' ids
The buffers are initialized in the following function:
/*Drawing is based on the buffers*/
public void refreshBuffers(){
/*Coordinates' List*/
float coords[] = new float[coordList.size()];
for(int i=0;i<coordList.size();i++){
coords[i]= coordList.get(i);
}
// initialize vertex byte buffer for shape coordinates
ByteBuffer bb = ByteBuffer.allocateDirect(
// (number of coordinate values * 4 bytes per float)
coords.length * 4);
// use the device hardware's native byte order
bb.order(ByteOrder.nativeOrder());
// create a floating point buffer from the ByteBuffer
vertexBuffer = bb.asFloatBuffer();
// add the coordinates to the FloatBuffer
vertexBuffer.put(coords);
// set the buffer to read the first coordinate
vertexBuffer.position(0);
/*Index List*/
short order[] = new short[(short)orderList.size()];
for(int i=0;i<order.length;i++){
order[i] = (short) orderList.get(i);
}
// initialize byte buffer for the draw list
ByteBuffer dlb = ByteBuffer.allocateDirect(
// (# of coordinate values * 2 bytes per short)
order.length * 2);
dlb.order(ByteOrder.nativeOrder());
orderBuffer = dlb.asShortBuffer();
orderBuffer.put(order);
orderBuffer.position(0);
/*texture list*/
if(textured){
float textureCoords[] = new float[textureList.size()];
for(int i=0;i<textureList.size();i++){
textureCoords[i] = textureList.get(i);
}
ByteBuffer byteBuf = ByteBuffer.allocateDirect(textureCoords.length * 4);
byteBuf.order(ByteOrder.nativeOrder());
textureBuffer = byteBuf.asFloatBuffer();
textureBuffer.put(textureCoords);
textureBuffer.position(0);
}
}
I load the image into the object with the following code:
public void initTexture(GL10 gl, Bitmap inBitmap){
bitmap = inBitmap;
loadTexture(gl);
textureIsReady = true;
}
/*http://www.jayway.com/2010/12/30/opengl-es-tutorial-for-android-part-vi-textures/*/
public void loadTexture(GL10 gl){
gl.glGenTextures(1, textures, 0);
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
gl.glTexParameterx(GL10.GL_TEXTURE_2D,
GL10.GL_TEXTURE_MAG_FILTER,
GL10.GL_LINEAR);
gl.glTexParameterx(GL10.GL_TEXTURE_2D,
GL10.GL_TEXTURE_MIN_FILTER,
GL10.GL_LINEAR);
gl.glTexParameterx(GL10.GL_TEXTURE_2D,
GL10.GL_TEXTURE_WRAP_S,
GL10.GL_CLAMP_TO_EDGE);
gl.glTexParameterx(GL10.GL_TEXTURE_2D,
GL10.GL_TEXTURE_WRAP_T,
GL10.GL_CLAMP_TO_EDGE);
/*bind bitmap to texture*/
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
}
And the drawing happens based on this code:
public void draw(GL10 gl){
if(textured && textureIsReady){
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
//loadTexture(gl);
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0,
vertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0,
textureBuffer);
}else{
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glColor4f(color[0], color[1], color[2], color[3]);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0,
vertexBuffer);
}
if(!indexed)gl.glDrawArrays(drawMode, 0, coordCount);
else gl.glDrawElements(drawMode, orderCount, GL10.GL_UNSIGNED_SHORT, orderBuffer);
if(textured && textureIsReady){
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glDisable(GL10.GL_TEXTURE_2D);
}else{
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
}
}
The initialization is as follows:
pic = new VertexObject();
pic.indexed = true;
pic.textured = true;
pic.initTexture(gl,MainActivity.bp);
pic.color[0] = 0.0f;
pic.color[1] = 0.0f;
pic.color[2] = 0.0f;
float inputVertex[] = {2.0f,2.0f,0.0f};
float inputTexture[] = {0.0f,0.0f};
pic.addTexturedVertex(inputVertex,inputTexture);
inputVertex[0] = 2.0f;
inputVertex[1] = 8.0f;
inputTexture[0] = 0.0f;
inputTexture[0] = 1.0f;
pic.addTexturedVertex(inputVertex,inputTexture);
inputVertex[0] = 8.0f;
inputVertex[1] = 8.0f;
inputTexture[0] = 1.0f;
inputTexture[0] = 1.0f;
pic.addTexturedVertex(inputVertex,inputTexture);
inputVertex[0] = 8.0f;
inputVertex[1] = 2.0f;
inputTexture[0] = 1.0f;
inputTexture[0] = 0.0f;
pic.addTexturedVertex(inputVertex,inputTexture);
pic.addIndex((short)0);
pic.addIndex((short)1);
pic.addIndex((short)2);
pic.addIndex((short)0);
pic.addIndex((short)2);
pic.addIndex((short)3);
The coordinates are just simply added to the arrayList, and then I refresh the buffers.
The bitmap is valid, because it is showing up on an imageView.
The image is a png file with the size of 128x128 in the drawable folder.
From what I gathered, the image is getting to the VertexObject, but something isn't right with the texture mapping. Any pointers on what I am doing wrong?
Okay, I got it!
I downloaded a working example from the internet and rewrote it step by step to resemble the object presented above, checking at each step whether it still worked. It turns out the problem wasn't in the graphical part, because the object worked in another context with different coordinates.
Long story short:
I got the texture UV mapping wrong!
That's why I got the solid color: the texture was loaded, but the UV mapping wasn't correct.
Short story long:
At the lines
inputVertex[0] = 2.0f;
inputVertex[1] = 8.0f;
inputTexture[0] = 0.0f;
inputTexture[0] = 1.0f;
The indexing was wrong: only the first element of inputTexture was ever updated. There may have been additional errors regarding the sizes of the different arrays describing the vertex coordinates, but rewriting along the linked example fixed the problem and produced more concise code.
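For illustration, here is the intended set of UV pairs with both components written for each corner (the bug was that only inputTexture[0] was ever assigned); a minimal sketch, with an illustrative class name:

```java
public final class QuadUV {
    /** Returns the four (u,v) pairs for the quad, in the vertex order used above. */
    public static float[][] corners() {
        return new float[][] {
            { 0.0f, 0.0f },  // vertex (2,2)
            { 0.0f, 1.0f },  // vertex (2,8): u stays 0, v becomes 1
            { 1.0f, 1.0f },  // vertex (8,8)
            { 1.0f, 0.0f }   // vertex (8,2)
        };
    }
}
```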
I'm writing because I'm trying to use OpenGL on Android in order to make a 2D game :)
Here is my way of working:
I have a class GlRender
public class GlRenderer implements Renderer
In this class, in onDrawFrame I call
GameRender() and GameDisplay()
And in GameDisplay() I have:
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
// Reset the Projection Matrix
gl.glMatrixMode(GL10.GL_PROJECTION); //Select the Projection Matrix
gl.glLoadIdentity(); //Reset the Projection Matrix
// Point to our buffers
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Set the face rotation
gl.glFrontFace(GL10.GL_CW);
for(Sprites...)
{
sprite.draw(gl, att.getX(), att.getY());
}
//Disable the client state before leaving
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
And in the draw method of sprite I have:
_vertices[0] = x;
_vertices[1] = y;
_vertices[3] = x;
_vertices[4] = y + height;
_vertices[6] = x + width;
_vertices[7] = y;
_vertices[9] = x + width;
_vertices[10] = y + height;
if(vertexBuffer != null)
{
vertexBuffer.clear();
}
// fill the vertexBuffer with the vertices
vertexBuffer.put(_vertices);
// set the cursor position to the beginning of the buffer
vertexBuffer.position(0);
// bind the previously generated texture
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
// Point to our vertex buffer
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer.mByteBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer.mByteBuffer);
// Draw the vertices as triangle strip
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, _vertices.length / 3);
My problem is that I have a low frame rate; even at 30 FPS I sometimes lose frames with only 1 sprite (and it is the same with 50).
Am I doing something wrong? How can I improve the FPS?
In general, you should not be changing your vertex buffer for every sprite drawn. And by "in general", I pretty much mean "never," unless you're making a particle system. And even then, you would use proper streaming techniques, not write a quad at a time.
For each sprite, you have a pre-built quad. To render it, you use shader uniforms to transform the sprite from a neutral position to the actual position you want to see it on screen.
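The idea can be sketched in plain Java: the quad's vertex data stays a fixed unit square, and only per-sprite parameters change each frame. In practice this arithmetic runs in the vertex shader, with the position and size supplied as uniforms; the names below are illustrative:

```java
public final class SpriteTransform {
    /**
     * Transforms a unit-quad corner (x,y in [0,1]) to its on-screen position,
     * the kind of work a vertex shader would do from per-sprite uniforms.
     */
    public static float[] place(float x, float y,
                                float posX, float posY,
                                float width, float height) {
        // Scale the neutral corner by the sprite size, then translate.
        return new float[] { posX + x * width, posY + y * height };
    }
}
```

With this approach the vertex buffer is uploaded once, and drawing a sprite is just a uniform update plus a draw call, instead of refilling the buffer every frame.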