I'm currently developing an .obj file loader on Android. I have done the basics, and the 3D mesh is drawn correctly with OpenGL. Unfortunately, I have a problem binding the texture. Let me explain in more detail:
The .obj file has the following structure:
v -0.751804 0.447968 -1.430558
v -0.751802 2.392585 -1.428428
... etc list with all the vertices ...
vt 0.033607 0.718905
vt 0.033607 0.718615
... etc list with all the texture coordinates ...
f 237/1 236/2 253/3 252/4
f 236/2 235/5 254/6 253/3
... etc list with all the faces ...
The f lines indicate the indices where the appropriate vertex and texture coordinates are stored, like
f vertex_index/texture_coord_index
So my program
parses the vertices and stores them in a Vector<Float>,
parses the texture coordinates and stores them in a Vector<Float>
and finally parses the faces and stores every vertex index in a Vector<Short> and every texture coordinate index in a Vector<Short>
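For reference, that parsing step might look roughly like this. This is only a minimal sketch, not the actual parser: the field names match the description above, the sample faces are quads (four index pairs per f line), and since .obj indices are 1-based, one is subtracted before storing:
private void parseLine(String line){
    String[] tokens = line.split("\\s+");
    if(tokens[0].equals("v")){
        // vertex position: three floats
        for(int i = 1; i <= 3; i++)
            vertices.add(Float.parseFloat(tokens[i]));
    }else if(tokens[0].equals("vt")){
        // texture coordinate: u and v
        textures.add(Float.parseFloat(tokens[1]));
        textures.add(Float.parseFloat(tokens[2]));
    }else if(tokens[0].equals("f")){
        // face: one vertex_index/texture_coord_index pair per corner
        for(int i = 1; i < tokens.length; i++){
            String[] pair = tokens[i].split("/");
            faces.add((short)(Short.parseShort(pair[0]) - 1));
            texturePointers.add((short)(Short.parseShort(pair[1]) - 1));
        }
    }
}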
After all this code, I'm creating the appropriate buffers:
public void buildVertexBuffer(){
ByteBuffer vBuf = ByteBuffer.allocateDirect(vertices.size() * 4);
vBuf.order(ByteOrder.nativeOrder());
vertexBuffer = vBuf.asFloatBuffer();
vertexBuffer.put(toFloatArray(vertices));
vertexBuffer.position(0);
}
where vertices is the vector that stores the float vertices
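(toFloatArray() and the toShortArray() used below aren't shown; presumably they just unbox the Vectors into primitive arrays, along these lines:)
private float[] toFloatArray(Vector<Float> v){
    float[] result = new float[v.size()];
    for(int i = 0; i < v.size(); i++)
        result[i] = v.get(i);
    return result;
}

private short[] toShortArray(Vector<Short> v){
    short[] result = new short[v.size()];
    for(int i = 0; i < v.size(); i++)
        result[i] = v.get(i);
    return result;
}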
public void buildFaceBuffer(){
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(faces.size() * 2);
byteBuffer.order(ByteOrder.nativeOrder());
faceBuffer = byteBuffer.asShortBuffer();
faceBuffer.put(toShortArray(faces));
faceBuffer.position(0);
}
where faces is the vector that stores the indices and
public void buildTextureBuffer(Vector<Float> textures){
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(texturePointers.size() * 4 * 2);
byteBuffer.order(ByteOrder.nativeOrder());
textureBuffer = byteBuffer.asFloatBuffer();
for(int i=0; i<texturePointers.size(); i++){
float u = textures.get(texturePointers.get(i) * 2);
float v = textures.get(texturePointers.get(i) * 2 + 1);
textureBuffer.put(u);
textureBuffer.put(v);
}
textureBuffer.position(0);
}
where textures holds the float texture coordinates and texturePointers holds the texture-coordinate indices parsed from the f lines.
The binding happens here:
public int[] loadTexture(GL10 gl, Context context){
if(textureFile == null)
return null;
int resId = getResourceId(textureFile, R.drawable.class);
if(resId == -1){
Log.d("Bako", "Texture not found...");
return null;
}
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resId);
int[] textures = new int[1];
gl.glGenTextures(1, textures, 0);
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
/*gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);*/
/*int size = bitmap.getRowBytes() * bitmap.getHeight();
ByteBuffer buffer = ByteBuffer.allocateDirect(size);
bitmap.copyPixelsToBuffer(buffer);
gl.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, bitmap.getWidth(), bitmap.getHeight(), 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_INT, buffer);*/
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
return textures;
}
Finally, the draw() method of my mesh looks like this:
public void draw(GL10 gl){
if(bindedTextures != null){
gl.glBindTexture(GL10.GL_TEXTURE_2D, bindedTextures[0]);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glFrontFace(GL10.GL_CW);
}
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
for(int i=0; i<parts.size(); i++){
ModelPart modelPart = parts.get(i);
Material material = modelPart.getMaterial();
if(material != null){
FloatBuffer a = material.getAmbientColorBuffer();
FloatBuffer d = material.getDiffuseColorBuffer();
FloatBuffer s = material.getSpecularColorBuffer();
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_AMBIENT, a);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_SPECULAR, s);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_DIFFUSE, d);
}
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, modelPart.getTextureBuffer()); // returns the texture buffer created with the buildTextureBuffer() method
gl.glEnableClientState(GL10.GL_NORMAL_ARRAY);
gl.glNormalPointer(GL10.GL_FLOAT, 0, modelPart.getNormalBuffer());
gl.glDrawElements(GL10.GL_TRIANGLES, modelPart.getFacesSize(), GL10.GL_UNSIGNED_SHORT, modelPart.getFaceBuffer());
//gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
//gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
gl.glDisableClientState(GL10.GL_NORMAL_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
}
When I run this application the 3D model is drawn like a charm, but the texture is somehow stretched. The image that contains the texture has a red background with the appropriate image in the center, and this red background is drawn onto the whole 3D model.
Well, my first question is whether the textureBuffer is built correctly. Do I have to change the code in buildTextureBuffer()?
And the second: is the draw() method correct? Could my problem be with the faces buffer?
So, in OpenGL a vertex is whatever combination of information describes a particular point on the model. You're using the old fixed-function pipeline, so a vertex is one or more of a location, some texture coordinates, a normal and a colour.
In OBJ a v is only a location, and a vt is only a texture coordinate. The thing that maps to the OpenGL concept of a vertex is each unique combination of — in your case — location + texture coordinate pairs given after an f.
You need a mapping whereby, if the f says 56/92, you can look up the 56/92 and find out that you consider that to be, say, vertex 23, and have communicated suitable arrays to OpenGL such that the location value in slot 23 was the 56th thing the OBJ gave as a v and the 92nd thing it gave as a vt.
Putting it another way, OBJ files have an extra level of indirection that OpenGL does not.
It looks to me like you're not resolving that difference. A common approach would be to use a HashMap from v/vt pair to output index, building your output arrays on demand as you parse the f.
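A minimal sketch of that approach, assuming rawPositions and rawTexCoords hold the flat v and vt values as parsed (all names here are illustrative, not the asker's actual fields; Map/HashMap are java.util):
ArrayList<Float> rawPositions; // all "v" values, 3 floats each
ArrayList<Float> rawTexCoords; // all "vt" values, 2 floats each
Map<String, Short> knownPairs = new HashMap<String, Short>();
ArrayList<Float> outPositions = new ArrayList<Float>(); // x,y,z per OpenGL vertex
ArrayList<Float> outTexCoords = new ArrayList<Float>(); // u,v per OpenGL vertex
ArrayList<Short> outIndices = new ArrayList<Short>();   // fed to glDrawElements

private short resolve(String pair){ // pair is a face token such as "56/92"
    Short index = knownPairs.get(pair);
    if(index == null){
        // first time we see this v/vt combination: emit a new OpenGL vertex
        String[] parts = pair.split("/");
        int v = Integer.parseInt(parts[0]) - 1;  // OBJ indices are 1-based
        int vt = Integer.parseInt(parts[1]) - 1;
        outPositions.add(rawPositions.get(v * 3));
        outPositions.add(rawPositions.get(v * 3 + 1));
        outPositions.add(rawPositions.get(v * 3 + 2));
        outTexCoords.add(rawTexCoords.get(vt * 2));
        outTexCoords.add(rawTexCoords.get(vt * 2 + 1));
        index = (short) knownPairs.size(); // next free output slot
        knownPairs.put(pair, index);
    }
    return index;
}
While parsing each f line you call resolve() per token and collect the returned indices (triangulating quads, e.g. a b c d into a b c plus a c d, since you draw GL_TRIANGLES). The two output arrays then line up slot for slot, so the vertex buffer and texture buffer stay in sync.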
In my Android app, I have a framebuffer object (FBO) that captures the rendered scene as a texture.
The app is an origami game in which the user can fold a paper freely:
On every fold, the currently rendered scene is saved to a texture using the FBO, and then I redraw the paper with new coordinates and that texture attached to it, so it looks like folded paper. This way the user can fold the paper as many times as he wants.
On every frame I want to check the rendered scene to determine whether the user has reached the final shape (assume I have the final shape in a 2D array filled with 0s and 1s: 0 for transparent pixels and 1 for colored pixels).
What I want is to somehow convert this texture to a 2D array filled with 0s and 1s,
0 for a transparent pixel and 1 for a colored pixel of the texture.
I need this so I can compare the result with a previously known 2D array, to determine whether the texture is the shape I want or not.
Is it possible to save the texture data to an array?
I can't use glReadPixels because it is too heavy to call every frame.
Here is the FBO class (I want to have renderTex[0] as an array):
public class FBO {
int [] fb, renderTex;
int texW;
int texH;
public FBO(int width,int height){
texW = width;
texH = height;
fb = new int[1];
renderTex= new int[1];
}
public void setup(GL10 gl){
// generate
((GL11ExtensionPack)gl).glGenFramebuffersOES(1, fb, 0);
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glGenTextures(1, renderTex, 0);// generate texture
gl.glBindTexture(GL10.GL_TEXTURE_2D, renderTex[0]);
gl.glTexParameterf(GL10.GL_TEXTURE_2D,
GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
gl.glTexParameterf(GL10.GL_TEXTURE_2D,
GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_NEAREST);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S,
GL10.GL_CLAMP_TO_EDGE);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T,
GL10.GL_CLAMP_TO_EDGE);
//texBuffer = ByteBuffer.allocateDirect(buf.length*4).order(ByteOrder.nativeOrder()).asIntBuffer();
//gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE,GL10.GL_MODULATE);
gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGBA, texW, texH, 0, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, null);
gl.glDisable(GL10.GL_TEXTURE_2D);
}
public boolean RenderStart(GL10 gl){
Log.d("TextureAndFBO", ""+renderTex[0] + " And " +fb[0]);
// Bind the framebuffer
((GL11ExtensionPack)gl).glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]);
// specify texture as color attachment
((GL11ExtensionPack)gl).glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES, GL10.GL_TEXTURE_2D, renderTex[0], 0);
int error = gl.glGetError();
if (error != GL10.GL_NO_ERROR) {
Log.d("err", "FIRST Background Load GLError: " + error+" ");
}
int status = ((GL11ExtensionPack)gl).glCheckFramebufferStatusOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES);
if (status != GL11ExtensionPack.GL_FRAMEBUFFER_COMPLETE_OES)
{
Log.d("err", "SECOND Background Load GLError: " + status + " ");
return false; // incomplete framebuffer: bail out instead of rendering into it
}
gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
return true;
}
public void RenderEnd(GL10 gl){
((GL11ExtensionPack)gl).glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, 0);
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glBindTexture(GL10.GL_TEXTURE_2D, 0);
gl.glColor4f(1.0f,1.0f,1.0f,1.0f);
gl.glDisable(GL10.GL_TEXTURE_2D);
}
public int getTexture(){
return renderTex[0];
}
public int getFBO(){
return fb[0];
}
}
If you are using OpenGL ES 3.0 or later, then a PBO would be a good solution. But I think you can use EGLImage, because it only needs OpenGL ES 1.1 or 2.0.
The function to create an EGLImageKHR is:
EGLImageKHR eglCreateImageKHR(EGLDisplay dpy,
EGLContext ctx,
EGLenum target,
EGLClientBuffer buffer,
const EGLint *attrib_list)
To allocate an ANativeWindowBuffer, Android has a simple wrapper called GraphicBuffer:
GraphicBuffer *window = new GraphicBuffer(width, height, PIXEL_FORMAT_RGBA_8888, GraphicBuffer::USAGE_SW_READ_OFTEN | GraphicBuffer::USAGE_HW_TEXTURE);
struct ANativeWindowBuffer *buffer = window->getNativeBuffer();
EGLImageKHR image = eglCreateImageKHR(eglGetCurrentDisplay(), EGL_NO_CONTEXT, EGL_NATIVE_BUFFER_ANDROID, (EGLClientBuffer)buffer, NULL /* attrib_list */);
To read pixels from an FBO, use one of the two methods below:
void EGLImageTargetTexture2DOES(enum target, eglImageOES image)
void EGLImageTargetRenderbufferStorageOES(enum target, eglImageOES image)
These two methods will establish all the properties of the target GL_TEXTURE_2D or GL_RENDERBUFFER.
uint8_t *ptr;
glBindTexture(GL_TEXTURE_2D, texture_id);
// bind the EGLImage as the texture's storage
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);
// lock the GraphicBuffer for CPU reads and copy the pixels out
window->lock(GraphicBuffer::USAGE_SW_READ_OFTEN, &ptr);
memcpy(pixels, ptr, width * height * 4);
window->unlock();
To accomplish what you want, you need to use a PBO (Pixel Buffer Object): you can map it and then read it as if it were a regular array.
The OpenGL ARB_pixel_buffer_object extension is very close to ARB_vertex_buffer_object. It simply expands ARB_vertex_buffer_object in order to store not only vertex data but also pixel data in buffer objects. A buffer object storing pixel data is called a Pixel Buffer Object (PBO). The ARB_pixel_buffer_object extension borrows the whole VBO framework and APIs, plus adds two additional "target" tokens. These tokens assist the PBO memory manager (the OpenGL driver) in determining the best location for the buffer object: system memory, shared memory or video memory. The target tokens also clearly specify that the bound PBO will be used in one of two different operations: GL_PIXEL_PACK_BUFFER_ARB to transfer pixel data to a PBO, or GL_PIXEL_UNPACK_BUFFER_ARB to transfer pixel data from a PBO.
It can be created similarly to other buffer objects:
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, size, 0, GL_DYNAMIC_READ);
Then you can read from an FBO (or a texture) easily:
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);
GLubyte *array = (GLubyte*)glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0, size, GL_MAP_READ_BIT);
// TODO: Do your checking of the shape inside of this 'array' pointer or copy it somewhere using memcpy()
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
Here GL_COLOR_ATTACHMENT0 is used as the input; see the specification of glReadBuffer for further details on how to specify the front or back buffer to be used.
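For completeness, the same read-back through Android's Java bindings might look like this. This is a sketch assuming an OpenGL ES 3.0 context and an RGBA attachment of size width × height (GLES30 is android.opengl.GLES30, ByteBuffer is java.nio.ByteBuffer):
int[] pbo = new int[1];
int size = width * height * 4; // RGBA8: 4 bytes per pixel
GLES30.glGenBuffers(1, pbo, 0);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbo[0]);
GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, size, null, GLES30.GL_DYNAMIC_READ);

// With a PACK buffer bound, the last glReadPixels argument is a byte offset
// into the buffer instead of a client-side pointer.
GLES30.glReadBuffer(GLES30.GL_COLOR_ATTACHMENT0);
GLES30.glReadPixels(0, 0, width, height, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);

// Map the buffer and inspect the pixels; mapping a frame later keeps the copy asynchronous.
ByteBuffer mapped = (ByteBuffer) GLES30.glMapBufferRange(
        GLES30.GL_PIXEL_PACK_BUFFER, 0, size, GLES30.GL_MAP_READ_BIT);
// e.g. the alpha of pixel i is mapped.get(i * 4 + 3); build the 0/1 array from that
GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);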
I'm trying to render a subdivided mesh with a displacement texture on it and a color texture. To do so I go through every pixel, create a vertex for it, and move that vertex according to a black and white image I have. The problem is that when I render it, I get something that looks a bit like TV snow.
Here's the relevant code:
public Plane(Bitmap image, Bitmap depth)
{
this.image = image; //color image
this.depth = depth; //BW depth image
this.w = image.getWidth();
this.h = image.getHeight();
vertexCoords = vertexArray(); //places vertices in 3d
drawOrder = orderArray(); //sets the draw order
colorCoords = colorArray(); //sets color per vertex
ByteBuffer bb = ByteBuffer.allocateDirect(vertexCoords.length * 4);
bb.order(ByteOrder.nativeOrder());
vertexBuffer = bb.asFloatBuffer();
vertexBuffer.put(vertexCoords);
vertexBuffer.position(0);
ByteBuffer dlb = ByteBuffer.allocateDirect(drawOrder.length * 2); // 2 bytes per short
dlb.order(ByteOrder.nativeOrder());
drawListBuffer = dlb.asShortBuffer();
drawListBuffer.put(drawOrder);
drawListBuffer.position(0);
ByteBuffer cbb = ByteBuffer.allocateDirect(colorCoords.length * 4);
cbb.order(ByteOrder.nativeOrder());
colorBuffer = cbb.asFloatBuffer();
colorBuffer.put(colorCoords);
colorBuffer.position(0);
}
public void draw(GL10 gl) {
// Counter-clockwise winding.
gl.glFrontFace(GL10.GL_CCW);
// Enable face culling.
gl.glEnable(GL10.GL_CULL_FACE);
// What faces to remove with the face culling.
gl.glCullFace(GL10.GL_BACK);
// Enabled the vertices buffer for writing and to be used during
// rendering.
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
// Specifies the location and data format of an array of vertex
// coordinates to use when rendering.
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
// Enable the color array buffer to be used during rendering.
gl.glEnableClientState(GL10.GL_COLOR_ARRAY); // NEW LINE ADDED.
// Point out the where the color buffer is.
gl.glColorPointer(4, GL10.GL_FLOAT, 0, colorBuffer); // NEW LINE ADDED.
gl.glDrawElements(GL10.GL_TRIANGLES, drawOrder.length,
GL10.GL_UNSIGNED_SHORT, drawListBuffer);
// Disable the vertices buffer.
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
// Disable face culling.
gl.glDisable(GL10.GL_CULL_FACE);
gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
}
What can I do to actually view the model instead of this snow? The patterns change if I turn my screen on and off, and they sometimes change randomly. It seems that the colors present in the original bitmap are also present in the snow (the snow color changes with different pictures), so I know I'm doing something right; I just don't know what's wrong here.
EDIT: here's the code for vertexArray()
public float[] vertexArray()
{
int totalPoints = w*h;
float[] arr = new float[totalPoints*3];
int i = 0;
for(int y = 0; y<h; y++)
{
for(int x = 0; x<w; x++)
{
arr[i] = x * 0.01f;
arr[i+1] = y * 0.01f;
arr[i+2] = 1.0f;//getDepth(x,y);
i+=3;
}
}
return arr;
}
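orderArray() isn't shown, but for a w×h grid it would typically emit two triangles per cell. A hypothetical version (not the asker's actual code) might be:
public short[] orderArray()
{
    // two CCW triangles per grid cell, matching glFrontFace(GL_CCW) in draw()
    short[] order = new short[(w-1)*(h-1)*6];
    int i = 0;
    for(int y = 0; y<h-1; y++)
    {
        for(int x = 0; x<w-1; x++)
        {
            short a = (short)(y*w + x); // vertex at (x, y)
            short b = (short)(a + 1);   // (x+1, y)
            short c = (short)(a + w);   // (x, y+1)
            short d = (short)(c + 1);   // (x+1, y+1)
            order[i++] = a; order[i++] = b; order[i++] = c;
            order[i++] = b; order[i++] = d; order[i++] = c;
        }
    }
    return order;
}
Note that short indices (and GL_UNSIGNED_SHORT in glDrawElements) can only address 65536 vertices, so one vertex per pixel of a full-size bitmap overflows them, which by itself can produce exactly this kind of garbage.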
I am trying to display a single texture on a quad.
I had a working VertexObject, which drew a square (or any geometric object) fine. Now I tried expanding it to handle textures too, and the textures don't work. I only see the quad in one solid color.
The coordinate data is stored in ArrayLists:
/*the vertices' coordinates*/
public int coordCount = 0;
/*float array of 3(x,y,z)*/
public ArrayList<Float> coordList = new ArrayList<Float>(coordCount);
/*the coordinates' indexes(if used)*/
/*maximum limit:32767*/
private int orderCount = 0;
private ArrayList<Short> orderList = new ArrayList<Short>(orderCount);
/*textures*/
public boolean textured;
private boolean textureIsReady;
private ArrayList<Float> textureList = new ArrayList<Float>(coordCount);
private Bitmap bitmap; //the image to be displayed
private int textures[]; //the textures' ids
The buffers are initialized in the following function:
/*Drawing is based on the buffers*/
public void refreshBuffers(){
/*Coordinates' List*/
float coords[] = new float[coordList.size()];
for(int i=0;i<coordList.size();i++){
coords[i]= coordList.get(i);
}
// initialize vertex byte buffer for shape coordinates
ByteBuffer bb = ByteBuffer.allocateDirect(
// (number of coordinate values * 4 bytes per float)
coords.length * 4);
// use the device hardware's native byte order
bb.order(ByteOrder.nativeOrder());
// create a floating point buffer from the ByteBuffer
vertexBuffer = bb.asFloatBuffer();
// add the coordinates to the FloatBuffer
vertexBuffer.put(coords);
// set the buffer to read the first coordinate
vertexBuffer.position(0);
/*Index List*/
short order[] = new short[orderList.size()];
for(int i=0;i<order.length;i++){
order[i] = (short) orderList.get(i);
}
// initialize byte buffer for the draw list
ByteBuffer dlb = ByteBuffer.allocateDirect(
// (# of coordinate values * 2 bytes per short)
order.length * 2);
dlb.order(ByteOrder.nativeOrder());
orderBuffer = dlb.asShortBuffer();
orderBuffer.put(order);
orderBuffer.position(0);
/*texture list*/
if(textured){
float textureCoords[] = new float[textureList.size()];
for(int i=0;i<textureList.size();i++){
textureCoords[i] = textureList.get(i);
}
ByteBuffer byteBuf = ByteBuffer.allocateDirect(textureCoords.length * 4);
byteBuf.order(ByteOrder.nativeOrder());
textureBuffer = byteBuf.asFloatBuffer();
textureBuffer.put(textureCoords);
textureBuffer.position(0);
}
}
I load the image into the object with the following code:
public void initTexture(GL10 gl, Bitmap inBitmap){
bitmap = inBitmap;
loadTexture(gl);
textureIsReady = true;
}
/*http://www.jayway.com/2010/12/30/opengl-es-tutorial-for-android-part-vi-textures/*/
public void loadTexture(GL10 gl){
gl.glGenTextures(1, textures, 0);
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
gl.glTexParameterx(GL10.GL_TEXTURE_2D,
GL10.GL_TEXTURE_MAG_FILTER,
GL10.GL_LINEAR);
gl.glTexParameterx(GL10.GL_TEXTURE_2D,
GL10.GL_TEXTURE_MIN_FILTER,
GL10.GL_LINEAR);
gl.glTexParameterx(GL10.GL_TEXTURE_2D,
GL10.GL_TEXTURE_WRAP_S,
GL10.GL_CLAMP_TO_EDGE);
gl.glTexParameterx(GL10.GL_TEXTURE_2D,
GL10.GL_TEXTURE_WRAP_T,
GL10.GL_CLAMP_TO_EDGE);
/*bind bitmap to texture*/
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
}
And the drawing happens based on this code:
public void draw(GL10 gl){
if(textured && textureIsReady){
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
//loadTexture(gl);
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0,
vertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0,
textureBuffer);
}else{
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glColor4f(color[0], color[1], color[2], color[3]);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0,
vertexBuffer);
}
if(!indexed)gl.glDrawArrays(drawMode, 0, coordCount);
else gl.glDrawElements(drawMode, orderCount, GL10.GL_UNSIGNED_SHORT, orderBuffer);
if(textured && textureIsReady){
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glDisable(GL10.GL_TEXTURE_2D);
}else{
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
}
}
The initialization is as follows:
pic = new VertexObject();
pic.indexed = true;
pic.textured = true;
pic.initTexture(gl,MainActivity.bp);
pic.color[0] = 0.0f;
pic.color[1] = 0.0f;
pic.color[2] = 0.0f;
float inputVertex[] = {2.0f,2.0f,0.0f};
float inputTexture[] = {0.0f,0.0f};
pic.addTexturedVertex(inputVertex,inputTexture);
inputVertex[0] = 2.0f;
inputVertex[1] = 8.0f;
inputTexture[0] = 0.0f;
inputTexture[0] = 1.0f;
pic.addTexturedVertex(inputVertex,inputTexture);
inputVertex[0] = 8.0f;
inputVertex[1] = 8.0f;
inputTexture[0] = 1.0f;
inputTexture[0] = 1.0f;
pic.addTexturedVertex(inputVertex,inputTexture);
inputVertex[0] = 8.0f;
inputVertex[1] = 2.0f;
inputTexture[0] = 1.0f;
inputTexture[0] = 0.0f;
pic.addTexturedVertex(inputVertex,inputTexture);
pic.addIndex((short)0);
pic.addIndex((short)1);
pic.addIndex((short)2);
pic.addIndex((short)0);
pic.addIndex((short)2);
pic.addIndex((short)3);
The coordinates are simply added to the ArrayList, and then I refresh the buffers.
The bitmap is valid, because it is showing up on an imageView.
The image is a png file with the size of 128x128 in the drawable folder.
From what I gathered, the image is getting to the VertexObject, but something isn't right with the texture mapping. Any pointers on what I'm doing wrong?
Okay, I got it!
I downloaded a working example from the internet and rewrote it step by step to resemble the object presented above, checking at each step whether it still worked. It turns out the problem isn't in the graphical part, because the object worked in another context with different coordinates.
Long story short:
I got the texture UV mapping wrong!
That's why I got the solid color: the texture was loaded, but the UV mapping wasn't correct.
Short story long:
At the lines
inputVertex[0] = 2.0f;
inputVertex[1] = 8.0f;
inputTexture[0] = 0.0f;
inputTexture[0] = 1.0f;
The indexing was wrong: only the first element of inputTexture was ever updated. There might have been some additional errors regarding the sizes of the different arrays describing the vertex coordinates, but rewriting the code based on the linked example fixed the problem and produced more concise code.
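In other words, the fix is simply to index both elements, for example:
inputVertex[0] = 2.0f;
inputVertex[1] = 8.0f;
inputTexture[0] = 0.0f;
inputTexture[1] = 1.0f; // was inputTexture[0] again, so v was never set
(The same inputTexture[1] assignment is missing from each of the other three vertices above.)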
I am facing problems loading a texture onto a circle. My circle is made with a triangle fan. It gives a bad output.
Original Image:
The Result:
My code:
public class MyOpenGLCircle {
private int points=360;
private float vertices[]={0.0f,0.0f,0.0f};
private FloatBuffer vertBuff, textureBuffer;
float texData[] = null;
float theta = 0;
int[] textures = new int[1];
int R=1;
float textCoordArray[] =
{
-R, (float) (R * (Math.sqrt(2) + 1)),
-R, -R,
(float) (R * (Math.sqrt(2) + 1)), -R
};
public MyOpenGLCircle(){
vertices = new float[(points+1)*3];
for(int i=0;i<(points)*3;i+=3)
{
//radius is 1/3
vertices[i]=(float) ( Math.cos(theta))/3;
vertices[i+1]=(float) (Math.sin(theta))/3;
vertices[i+2]=0;
theta += Math.PI / 90;
}
ByteBuffer bBuff=ByteBuffer.allocateDirect(vertices.length*4);
bBuff.order(ByteOrder.nativeOrder());
vertBuff=bBuff.asFloatBuffer();
vertBuff.put(vertices);
vertBuff.position(0);
ByteBuffer bBuff2=ByteBuffer.allocateDirect(textCoordArray.length * 4 * 360);
bBuff2.order(ByteOrder.nativeOrder());
textureBuffer=bBuff2.asFloatBuffer();
textureBuffer.put(textCoordArray);
textureBuffer.position(0);
}
public void draw(GL10 gl){
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertBuff);
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glEnable(GL10.GL_BLEND);
gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]); //4
gl.glTexCoordPointer(2, GL10.GL_FLOAT,0, textureBuffer); //5
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glDrawArrays(GL10.GL_TRIANGLE_FAN, 0, points/2);
}
public void loadBallTexture(GL10 gl, Context context, int resource){
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resource);
gl.glGenTextures(1, textures, 0);
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
}
}
Please help me through this.
For starters, you need to have the same number of texcoord pairs in your texcoord array as you have vertex tuples in your vertex array.
It looks like you've just got 3 pairs of texture coordinates, and 360 vertices.
You need to have a texcoord array that has 360 texture coordinates in it. Then when the vertices are drawn, vertex[0] gets texcoord[0], vertex[1] gets paired with texcoord[1], etc.
===EDIT===
You just have to define the texture coordinates in a similar manner to how you define your vertices: in a loop using mathematical formulas.
So for example, your first vertex of the triangle fan is at the center of the circle. For the center of your circle, you want the texcoord to reference the center of the texture, which is coordinate (0.5, 0.5).
As you go around the edges, just think about which texture coordinate maps to that part of the circle. So let's assume that your next vertex is the rightmost vertex of the circle, which lies along the same y value as the center of the circle. The texcoord for this one would be (1.0, 0.5), or the right edge of the texture in the vertical middle.
The top vertex of the circle would have texcoord (0.5, 1.0), the leftmost vertex would be (0.0, 0.5), etc.
You can use your trigonometry to fill in the rest of the vertices.
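A sketch of that texcoord loop, mirroring the vertex loop above: one (u, v) pair per vertex in the same order, with the fan's first vertex assumed to be the circle's center, which maps to the texture center (0.5, 0.5), and the rim scaled to a radius of 0.5 in UV space:
float texCoords[] = new float[(points+1)*2];
texCoords[0] = 0.5f; // center vertex maps to the middle of the texture
texCoords[1] = 0.5f;
float angle = 0;
for(int i=2; i<(points+1)*2; i+=2)
{
    // same angular step as the vertex loop, scaled into 0..1 texture space
    texCoords[i]   = 0.5f + (float)(Math.cos(angle)) * 0.5f;
    texCoords[i+1] = 0.5f + (float)(Math.sin(angle)) * 0.5f;
    angle += Math.PI / 90;
}
Depending on your bitmap's orientation you may need to flip v (i.e. use 0.5f - sin * 0.5f), since GLUtils.texImage2D loads images with the origin at the top-left.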
Does anyone know how to point out a given section of the texture buffer array stored in a HW buffer? I'm drawing a triangle strip and filling it with a square image. In my texture I have two square images next to each other, so the texture coordinate buffer points out them out with a total of 16 floats.
With software buffers I'm doing this to access the second image in the texture:
textureCoordinateBuffer.position(8);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureCoordinateBuffer);
With hardware buffers I assumed I do something like this:
// setup HW buffers
// ...
// select HW buffers
gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER,vertexCoordinateBufferIndex);
gl11.glVertexPointer(3, GL10.GL_FLOAT, 0, 0);
gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, textureCoordinateBufferIndex);
// Point out the first image in the texture coordinate buffer
gl11.glTexCoordPointer(2, GL11.GL_FLOAT, 0, 0);
// Draw
// ...
Which works nicely if you want to point out the first image in the texture.
But I would like to access the second image - so I assumed I do this in the last line:
// Point out the second image in the texture coordinate buffer - doesn't work!
gl11.glTexCoordPointer(2, GL11.GL_FLOAT, 0, 8);
But this renders a skewed and discolored image.
Does anyone know how to do this correctly?
You might want to take a look at the NeHe Android Tutorials. They go into this in detail and show you what you need to do.
Specifically, the lesson you are looking for is here:
http://insanitydesign.com/wp/projects/nehe-android-ports/
Lesson 6
You might not be binding and enabling the buffers, here's a snippet from the tutorial:
public void draw(GL10 gl) {
//Bind our only previously generated texture in this case
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
//Point to our buffers
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
//Set the face rotation
gl.glFrontFace(GL10.GL_CCW);
//Enable the vertex and texture state
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
//Draw the vertices as triangles, based on the Index Buffer information
gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_BYTE, indexBuffer);
//Disable the client state before leaving
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
Credit: Insanity Design - http://insanitydesign.com/
Edit:
I see what you're asking. Here's more code that should be able to help you. If you look into the SpriteMethodTest app for Android:
http://apps-for-android.googlecode.com/svn/trunk/SpriteMethodTest
You'll notice that Chris Pruett (the developer of this app) shows a multitude of ways to draw textures to the screen. Below is the code (I believe) you're looking for.
Grid.java
public void beginDrawingStrips(GL10 gl, boolean useTexture) {
beginDrawing(gl, useTexture);
if (!mUseHardwareBuffers) {
gl.glVertexPointer(3, mCoordinateType, 0, mVertexBuffer);
if (useTexture) {
gl.glTexCoordPointer(2, mCoordinateType, 0, mTexCoordBuffer);
}
} else {
GL11 gl11 = (GL11)gl;
// draw using hardware buffers
gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, mVertBufferIndex);
gl11.glVertexPointer(3, mCoordinateType, 0, 0);
gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, mTextureCoordBufferIndex);
gl11.glTexCoordPointer(2, mCoordinateType, 0, 0);
gl11.glBindBuffer(GL11.GL_ELEMENT_ARRAY_BUFFER, mIndexBufferIndex);
}
}
// Assumes beginDrawingStrips() has been called before this.
public void drawStrip(GL10 gl, boolean useTexture, int startIndex, int indexCount) {
int count = indexCount;
if (startIndex + indexCount >= mIndexCount) {
count = mIndexCount - startIndex;
}
if (!mUseHardwareBuffers) {
gl.glDrawElements(GL10.GL_TRIANGLES, count,
GL10.GL_UNSIGNED_SHORT, mIndexBuffer.position(startIndex));
} else {
GL11 gl11 = (GL11)gl;
gl11.glDrawElements(GL11.GL_TRIANGLES, count,
GL11.GL_UNSIGNED_SHORT, startIndex * CHAR_SIZE);
}
}
Specifically, you'll want to look at the code where it takes the false branch of !mUseHardwareBuffers. I suggest you look at the full Grid.java file for a better representation of how to do it, because he also sets up the texture pointers and enables OpenGL to start drawing.
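One detail from drawStrip() worth spelling out: when a hardware buffer is bound, the final argument of glDrawElements (and likewise of glTexCoordPointer) is a byte offset into the bound buffer, which is why the code multiplies startIndex by CHAR_SIZE. By the same logic, selecting the second image's coordinates in the question would need the offset expressed in bytes rather than floats, along these lines:
// 8 floats into the buffer, 4 bytes per float = byte offset 32
gl11.glTexCoordPointer(2, GL10.GL_FLOAT, 0, 8 * 4);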
On a Side Note: I suggest reading this from Chris also:
http://www.scribd.com/doc/16917369/Writing-Real-Time-Games-for-Android
He goes into what this app does and what he found the most effective way of drawing textures was.