I am having trouble loading a texture onto a circle. My circle is made with a triangle fan, and it gives a bad output.
Original Image:
The result:
My code:
public class MyOpenGLCircle {
    private int points=360;
    private float vertices[]={0.0f,0.0f,0.0f};
    private FloatBuffer vertBuff, textureBuffer;
    float texData[] = null;
    float theta = 0;
    int[] textures = new int[1];
    int R=1;
    float textCoordArray[] =
    {
        -R, (float) (R * (Math.sqrt(2) + 1)),
        -R, -R,
        (float) (R * (Math.sqrt(2) + 1)), -R
    };

    public MyOpenGLCircle(){
        vertices = new float[(points+1)*3];
        for(int i=0;i<(points)*3;i+=3)
        {
            //radius is 1/3
            vertices[i]=(float) ( Math.cos(theta))/3;
            vertices[i+1]=(float) (Math.sin(theta))/3;
            vertices[i+2]=0;
            theta += Math.PI / 90;
        }
        ByteBuffer bBuff=ByteBuffer.allocateDirect(vertices.length*4);
        bBuff.order(ByteOrder.nativeOrder());
        vertBuff=bBuff.asFloatBuffer();
        vertBuff.put(vertices);
        vertBuff.position(0);

        ByteBuffer bBuff2=ByteBuffer.allocateDirect(textCoordArray.length * 4 * 360);
        bBuff2.order(ByteOrder.nativeOrder());
        textureBuffer=bBuff2.asFloatBuffer();
        textureBuffer.put(textCoordArray);
        textureBuffer.position(0);
    }

    public void draw(GL10 gl){
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertBuff);

        gl.glEnable(GL10.GL_TEXTURE_2D);
        gl.glEnable(GL10.GL_BLEND);
        gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]); //4
        gl.glTexCoordPointer(2, GL10.GL_FLOAT,0, textureBuffer); //5
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);

        gl.glDrawArrays(GL10.GL_TRIANGLE_FAN, 0, points/2);
    }

    public void loadBallTexture(GL10 gl, Context context, int resource){
        Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resource);
        gl.glGenTextures(1, textures, 0);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
        bitmap.recycle();
    }
}
Please help me through this
For starters, you need to have the same number of texcoord pairs in your texcoord array as you have vertex tuples in your vertex array.
It looks like you've just got 3 pairs of texture coordinates, and 360 vertices.
You need a texcoord array with 360 texture coordinate pairs in it. Then when the vertices are drawn, vertex[0] gets paired with texcoord[0], vertex[1] with texcoord[1], etc.
===EDIT===
You just have to define the texture coordinates in a similar manner to how you define your vertices: in a loop using mathematical formulas.
So for example, your first vertex of the triangle fan is at the center of the circle. For the center of your circle, you want the texcoord to reference the center of the texture, which is coordinate (0.5, 0.5).
As you go around the edges, just think about which texture coordinate maps to that part of the circle. So let's assume that your next vertex is the rightmost vertex of the circle, which lies at the same y value as the center of the circle. The texcoord for this one would be (1.0, 0.5), or the right edge of the texture in the vertical middle.
The top vertex of the circle would have texcoord (0.5, 1.0), the leftmost vertex would be (0.0, 0.5), etc.
You can use the same trigonometry you used for the vertices to fill in the rest of the texture coordinates.
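Here is a minimal sketch of how that loop could look (it assumes your fan's first vertex is the circle's center and uses the same Math.PI / 90 angle step as your vertex loop):

float texCoords[] = new float[(points + 1) * 2];
texCoords[0] = 0.5f; // center of the texture for the center vertex of the fan
texCoords[1] = 0.5f;
double angle = 0;
for (int i = 2; i < texCoords.length; i += 2) {
    // cos/sin are in [-1, 1]; remap them into texture space [0, 1]
    texCoords[i] = (float) (0.5 + 0.5 * Math.cos(angle));     // s
    texCoords[i + 1] = (float) (0.5 + 0.5 * Math.sin(angle)); // t
    angle += Math.PI / 90;
}

Then fill your texture coordinate buffer from this array (allocating texCoords.length * 4 bytes) so that every vertex the fan draws has its own coordinate pair.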
Related
I'm currently developing an .obj file loader on Android. I have done the basics, and the 3D mesh is drawn correctly with OpenGL. Unfortunately I have a problem with binding the texture. Let me explain in more detail:
The .obj file has the following structure:
v -0.751804 0.447968 -1.430558
v -0.751802 2.392585 -1.428428
... etc list with all the vertices ...
vt 0.033607 0.718905
vt 0.033607 0.718615
... etc list with all the texture coordinates ...
f 237/1 236/2 253/3 252/4
f 236/2 235/5 254/6 253/3
... etc list with all the faces ...
The f lines indicate the indices where the appropriate vertex and texture coordinates are stored, like
f vertex_index/texture_coord_index
So my program
parses the vertices and stores them in a Vector<Float>,
parses the texture coordinates and stores them in a Vector<Float>
and finally parses the faces and stores every vertex index in a Vector<Short> and every texture coordinate index in a Vector<Short>
After all this code, I'm creating the appropriate buffers:
public void buildVertexBuffer(){
ByteBuffer vBuf = ByteBuffer.allocateDirect(vertices.size() * 4);
vBuf.order(ByteOrder.nativeOrder());
vertexBuffer = vBuf.asFloatBuffer();
vertexBuffer.put(toFloatArray(vertices));
vertexBuffer.position(0);
}
where vertices is the vector that stores the float vertices
public void buildFaceBuffer(){
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(faces.size() * 2);
byteBuffer.order(ByteOrder.nativeOrder());
faceBuffer = byteBuffer.asShortBuffer();
faceBuffer.put(toShortArray(faces));
faceBuffer.position(0);
}
where faces is the vector that stores the indices and
public void buildTextureBuffer(Vector<Float> textures){
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(texturePointers.size() * 4 * 2);
byteBuffer.order(ByteOrder.nativeOrder());
textureBuffer = byteBuffer.asFloatBuffer();
for(int i=0; i<texturePointers.size(); i++){
float u = textures.get(texturePointers.get(i) * 2);
float v = textures.get(texturePointers.get(i) * 2 + 1);
textureBuffer.put(u);
textureBuffer.put(v);
}
textureBuffer.position(0);
}
where textures holds the float texture coordinates and texturePointers holds the indices that point into them.
The binding happens here:
public int[] loadTexture(GL10 gl, Context context){
if(textureFile == null)
return null;
int resId = getResourceId(textureFile, R.drawable.class);
if(resId == -1){
Log.d("Bako", "Texture not found...");
return null;
}
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resId);
int[] textures = new int[1];
gl.glGenTextures(1, textures, 0);
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
/*gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);*/
/*int size = bitmap.getRowBytes() * bitmap.getHeight();
ByteBuffer buffer = ByteBuffer.allocateDirect(size);
bitmap.copyPixelsToBuffer(buffer);
gl.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, bitmap.getWidth(), bitmap.getHeight(), 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_INT, buffer);*/
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
return textures;
}
Finally my draw() method of my mesh looks like this
public void draw(GL10 gl){
if(bindedTextures != null){
gl.glBindTexture(GL10.GL_TEXTURE_2D, bindedTextures[0]);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glFrontFace(GL10.GL_CW);
}
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
for(int i=0; i<parts.size(); i++){
ModelPart modelPart = parts.get(i);
Material material = modelPart.getMaterial();
if(material != null){
FloatBuffer a = material.getAmbientColorBuffer();
FloatBuffer d = material.getDiffuseColorBuffer();
FloatBuffer s = material.getSpecularColorBuffer();
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_AMBIENT, a);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_SPECULAR, s);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_DIFFUSE, d);
}
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, modelPart.getTextureBuffer()); // returns the texture buffer created with the buildTextureBuffer() method
gl.glEnableClientState(GL10.GL_NORMAL_ARRAY);
gl.glNormalPointer(GL10.GL_FLOAT, 0, modelPart.getNormalBuffer());
gl.glDrawElements(GL10.GL_TRIANGLES, modelPart.getFacesSize(), GL10.GL_UNSIGNED_SHORT, modelPart.getFaceBuffer());
//gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
//gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
gl.glDisableClientState(GL10.GL_NORMAL_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
}
When I run this application the 3D model is drawn like a charm, but the texture is somehow stretched. The image that contains the texture has a red background with the appropriate image in the center, and this red background is drawn onto the whole 3D model.
My first question is whether the textureBuffer is built correctly. Do I have to change the code in buildTextureBuffer()?
And the second one: is the draw() method correct? Could my problem be with the faces buffer?
So, in OpenGL a vertex is whatever combination of information describes a particular point on the model. You're using the old fixed pipeline, so a vertex is one or more of a location, some texture coordinates, a normal and a colour.
In OBJ a v is only a location. The thing that maps to the OpenGL concept of a vertex is each unique combination (in your case, of location plus texture coordinates) given after an f.
You need a mapping whereby, if an f says 56/92, you can look up 56/92 and find that you consider it to be, say, vertex 23, and you have handed OpenGL arrays in which the location in slot 23 is the 56th thing the OBJ gave as a v and the texture coordinate in slot 23 is the 92nd thing it gave as a vt.
Putting it another way, OBJ files have an extra level of indirection that OpenGL does not.
It looks to me like you're not resolving that difference. A common approach would be to use a HashMap from v/vt pair to output index, building your output arrays on demand as you parse the f.
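A rough sketch of that approach (hypothetical names; it assumes the raw v and vt values are already parsed into flat lists, and that you call cornerIndex() for every vIdx/vtIdx pair you read from an f line):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Collapses OBJ's two-level "v-index/vt-index" references into one GL-style
// index per unique combination, building flat position/texcoord arrays as it goes.
class ObjIndexer {
    private final List<Float> rawPositions; // x,y,z triples parsed from the v lines
    private final List<Float> rawTexCoords; // u,v pairs parsed from the vt lines
    private final Map<String, Short> seen = new HashMap<String, Short>();

    final List<Float> glPositions = new ArrayList<Float>(); // 3 floats per GL vertex
    final List<Float> glTexCoords = new ArrayList<Float>(); // 2 floats per GL vertex

    ObjIndexer(List<Float> rawPositions, List<Float> rawTexCoords) {
        this.rawPositions = rawPositions;
        this.rawTexCoords = rawTexCoords;
    }

    // Call for each "vIdx/vtIdx" corner of an f line (OBJ indices are 1-based).
    // Returns the index to store in the element array used by glDrawElements.
    short cornerIndex(int vIdx, int vtIdx) {
        String key = vIdx + "/" + vtIdx;
        Short index = seen.get(key);
        if (index == null) {
            index = (short) (glPositions.size() / 3);
            glPositions.add(rawPositions.get((vIdx - 1) * 3));
            glPositions.add(rawPositions.get((vIdx - 1) * 3 + 1));
            glPositions.add(rawPositions.get((vIdx - 1) * 3 + 2));
            glTexCoords.add(rawTexCoords.get((vtIdx - 1) * 2));
            glTexCoords.add(rawTexCoords.get((vtIdx - 1) * 2 + 1));
            seen.put(key, index);
        }
        return index;
    }
}

Since your f lines are quads, you would call cornerIndex() four times per face and emit two triangles (i0, i1, i2) and (i0, i2, i3) into your index buffer; then the positions, texture coordinates and indices all agree, and glDrawElements pulls matching data for every vertex.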
I want to change the position of an object in OpenGL. I found this class and I want to write a change() function; when the program calls change(), I want to change the position of the object.
This class creates a square with a texture over it... And I want to change its position in pixels...
public class Square {
private FloatBuffer vertexBuffer; // buffer holding the vertices
private float vertices[] = {
-1.0f, -1.0f, 0.0f, // V1 - bottom left
-1.0f, 1.0f, 0.0f, // V2 - top left
1.0f, -1.0f, 0.0f, // V3 - bottom right
1.0f, 1.0f, 0.0f // V4 - top right
};
private FloatBuffer textureBuffer; // buffer holding the texture coordinates
private float texture[] = {
// Mapping coordinates for the vertices
0.0f, 1.0f, // top left (V2)
0.0f, 0.0f, // bottom left (V1)
1.0f, 1.0f, // top right (V4)
1.0f, 0.0f // bottom right (V3)
};
private FloatBuffer textureBuffer1; // buffer holding the texture coordinates
private float texture1[] = {
// Mapping coordinates for the vertices
2.0f, 1.0f, // top left (V2)
2.0f, 0.0f, // bottom left (V1)
1.0f, 1.0f, // top right (V4)
1.0f, 0.0f // bottom right (V3)
};
/** The texture pointer */
private int[] textures = new int[1];
public Square() {
// a float has 4 bytes so we allocate for each coordinate 4 bytes
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(vertices.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
// allocates the memory from the byte buffer
vertexBuffer = byteBuffer.asFloatBuffer();
// fill the vertexBuffer with the vertices
vertexBuffer.put(vertices);
// set the cursor position to the beginning of the buffer
vertexBuffer.position(0);
byteBuffer = ByteBuffer.allocateDirect(texture.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
textureBuffer = byteBuffer.asFloatBuffer();
textureBuffer.put(texture);
textureBuffer.position(0);
byteBuffer = ByteBuffer.allocateDirect(texture1.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
textureBuffer1 = byteBuffer.asFloatBuffer();
textureBuffer1.put(texture1);
textureBuffer1.position(0);
}
/**
 * Load the texture for the square
 * @param gl
 * @param context
 */
public void loadGLTexture(GL10 gl, Context context) {
// loading texture
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(),
R.drawable.a);
// generate one texture pointer
gl.glGenTextures(1, textures, 0);
// ...and bind it to our array
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
// create nearest filtered texture
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
// Use Android GLUtils to specify a two-dimensional texture image from our bitmap
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
// Clean up
bitmap.recycle();
}
/** The draw method for the square with the GL context */
public void draw(GL10 gl) {
// bind the previously generated texture
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
// Point to our buffers
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Set the face rotation
gl.glFrontFace(GL10.GL_CW);
// Point to our vertex buffer
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
// Draw the vertices as triangle strip
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
//Disable the client state before leaving
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
}
OpenGL is a drawing API, not a scene graph. In OpenGL there are no models, objects or scenes; there are only points, lines and triangles, drawn to a pixel-based framebuffer. What this means is that every change in your scene must be followed by a full redraw of the scene.
So if you want to move something on the screen, you change the value of the variable(s) controlling the position and do a full redraw.
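With the fixed-function ES 1.1 API you are using, that usually means keeping the position in a couple of fields, updating them in your change() method, and applying a translation in draw() before the quad is drawn again. A minimal sketch, with hypothetical field names added to your Square class (the values are in world units, not pixels; converting from pixels depends on your projection and viewport):

private float posX = 0f; // hypothetical position state
private float posY = 0f;

public void change(float newX, float newY) {
    posX = newX;
    posY = newY;
    // then trigger a redraw, e.g. by calling requestRender() on the GLSurfaceView
}

public void draw(GL10 gl) {
    gl.glPushMatrix();
    gl.glTranslatef(posX, posY, 0f); // move the quad before drawing it
    // ... the existing bind / vertex pointer / texcoord pointer / glDrawArrays calls ...
    gl.glPopMatrix();
}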
I have an RGB buffer that I draw into a texture, which is placed on a GLSurfaceView.
The size of the RGB image is the same as the size of the GLSurfaceView.
An image of size 1024x600 (16:9, full screen) is drawn correctly, but an image of size 800x600 (4:3) is drawn with horizontal lines as here, but without additional columns.
Here is the code how I draw images:
SurfaceView code
public void onSurfaceChanged(GL10 gl, int width, int height) {
if(height == 0) { //Prevent A Divide By Zero By
height = 1; //Making Height Equal One
}
currHalfWidth = width/2;
currHalfHeight = height/2;
cameraDist = (float)(currHalfHeight / Math.tan(Math.toRadians(45.0f / 2.0f)));
gl.glViewport(0, 0, width, height); //Reset The Current Viewport
gl.glMatrixMode(GL10.GL_PROJECTION); //Select The Projection Matrix
gl.glLoadIdentity(); //Reset The Projection Matrix
//Calculate The Aspect Ratio Of The Window
GLU.gluPerspective(gl, 45.0f, (float)width / (float)height, 0.1f, 2000.0f);
//GLU.gluPerspective(gl, 0.0f, (float)width / (float)height, 0.1f, 100.0f);
gl.glMatrixMode(GL10.GL_MODELVIEW); //Select The Modelview Matrix
gl.glLoadIdentity(); //Reset The Modelview Matrix
cameraPreview.setSurfaceSize(width, height);
}
CameraPreview code
public void setSurfaceSize(int width, int height)
{
surfaceWidth = width;
surfaceHeight = height;
out = new byte[width*height*2];
vertexBuffer.clear();
vertices = new float[]{
-height/2.0f, -width/2.0f, 0.0f, //Bottom Left
height/2.0f, -width/2.0f, 0.0f, //Bottom Right
-height/2.0f, width/2.0f, 0.0f, //Top Left
height/2.0f, width/2.0f, 0.0f //Top Right
};
vertexBuffer.put(vertices);
vertexBuffer.position(0);
}
public void draw(GL10 gl, byte[] yuv_data, Context context) {
if(yuv_data != null && context != null)
this.loadGLTexture(gl, yuv_data, context);
// bind the previously generated texture
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
// Point to our buffers
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Set the face rotation
gl.glFrontFace(GL10.GL_CW);
// Point to our vertex buffer
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
// Draw the vertices as triangle strip
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
//Disable the client state before leaving
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
public void loadGLTexture(GL10 gl, byte[] yuv_data, Context context) {
Camera.Parameters params = MainScreen.getCameraParameters();
int imageWidth = params.getPreviewSize().width;
int imageHeight = params.getPreviewSize().height;
textureWidth = 512;
textureHeight = 512;
NativeConverter.convertPreview(yuv_data, out, imageWidth, imageHeight, textureWidth, textureHeight);
//...and bind it to our array
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
//Create Nearest Filtered Texture
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_NEAREST);
//Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGB, textureWidth, textureHeight, 0, GL10.GL_RGB, GL10.GL_UNSIGNED_BYTE, ByteBuffer.wrap(out));
}
UPD: Problem solved! It wasn't an OpenGL issue; the input 'yuv_data' buffer was already corrupted.
Can't be completely certain without knowing what NativeConverter is doing, but this sounds very much like a pixel unpack alignment issue.
By default OpenGL expects all texture rows to be a multiple of 4 bytes. You can get into problems when you use byte-per-pixel values other than 4 with non-power-of-two image sizes, both of which you seem to be using.
If you don't want to align your texture rows to 4-byte boundaries, then you have to tell OpenGL that they are not aligned by calling glPixelStorei(GL_UNPACK_ALIGNMENT, 1); before uploading your texture.
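In your case that call would go in loadGLTexture(), just before the upload. A minimal sketch (it assumes NativeConverter writes tightly packed RGB rows into out):

// tell GL that rows in 'out' start on 1-byte boundaries rather than 4-byte ones
gl.glPixelStorei(GL10.GL_UNPACK_ALIGNMENT, 1);
gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGB, textureWidth, textureHeight,
        0, GL10.GL_RGB, GL10.GL_UNSIGNED_BYTE, ByteBuffer.wrap(out));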
I'm getting in touch because I'm trying to use OpenGL with Android in order to make a 2D game :)
Here is my way of working:
I have a class GlRenderer
public class GlRenderer implements Renderer
In this class, in onDrawFrame() I call GameRender() and GameDisplay().
And in GameDisplay() I have:
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
// Reset the Modelview Matrix
gl.glMatrixMode(GL10.GL_PROJECTION); //Select The Modelview Matrix
gl.glLoadIdentity(); //Reset The Modelview Matrix
// Point to our buffers
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Set the face rotation
gl.glFrontFace(GL10.GL_CW);
for(Sprites...)
{
sprite.draw(gl, att.getX(), att.getY());
}
//Disable the client state before leaving
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
And in the draw method of sprite I have:
_vertices[0] = x;
_vertices[1] = y;
_vertices[3] = x;
_vertices[4] = y + height;
_vertices[6] = x + width;
_vertices[7] = y;
_vertices[9] = x + width;
_vertices[10] = y + height;
if(vertexBuffer != null)
{
vertexBuffer.clear();
}
// fill the vertexBuffer with the vertices
vertexBuffer.put(_vertices);
// set the cursor position to the beginning of the buffer
vertexBuffer.position(0);
// bind the previously generated texture
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
// Point to our vertex buffer
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer.mByteBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer.mByteBuffer);
// Draw the vertices as triangle strip
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, _vertices.length / 3);
My problem is that I have a low frame rate; even at 30 FPS I lose some frames sometimes with only 1 sprite (and it is the same with 50).
Am I doing something wrong? How can I improve FPS?
In general, you should not be changing your vertex buffer for every sprite drawn. And by "in general", I pretty much mean "never," unless you're making a particle system. And even then, you would use proper streaming techniques, not write a quad at a time.
For each sprite, you have a pre-built quad. To render it, you use shader uniforms to transform the sprite from a neutral position to the actual position you want to see it on screen.
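Since your code uses the fixed-function ES 1.1 API rather than shaders, the equivalent of "transform with uniforms" is to build one quad's vertex buffer once and position each sprite with the modelview matrix, instead of rewriting the vertices every frame. A minimal sketch of what a per-sprite draw could look like under that assumption (hypothetical buffers holding a unit quad centred on the origin):

public void draw(GL10 gl, float x, float y, float width, float height) {
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);

    gl.glPushMatrix();
    gl.glTranslatef(x + width / 2f, y + height / 2f, 0f); // place the sprite
    gl.glScalef(width, height, 1f);                       // size the unit quad

    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);   // filled once, in the constructor
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
    gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);

    gl.glPopMatrix();
}

This way the buffers never change between frames; only a couple of matrix calls per sprite do.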
I'm new to OpenGL and I'm developing an Augmented-reality application for Android.
Until now, I was drawing white squares, perpendicular to the camera, pointing the user in the direction where a "Point of Interest" would be.
Now, I'm trying to show some text on the squares.
I've read a lot and it seems that creating a texture with the text is the most direct and easiest approach, so I'm creating the textures as soon as I get the data for the Points of Interest and sticking them onto their squares. For that, I use bitmaps.
Let's see some code. Within my onDrawFrame method, I do something like this:
public void onDrawFrame(GL10 gl) {
// Get sensors matrix
...
//Clear Screen And Depth Buffer
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
// Load remapped matrix
gl.glMatrixMode(GL10.GL_MODELVIEW);
// List of Points of interest (modified when some data is downloaded)
synchronized (poiList) {
if(mNewData){ // True if new data has been downloaded
if(textures != null) // Delete old textures
gl.glDeleteTextures(textures.length, textures, 0);
textures = loadGLTexture(gl, soapPoiList.getPoiList());
mNewData = false;
}
int i = 0;
// Iterate the list
for (PointOfInterest c : poiList) {
gl.glLoadIdentity();
gl.glLoadMatrixf(remappedRotationMatrix, 0);
// Get bearing
...
// Place polygon in the right place
gl.glRotatef(-bearing, 0, 0, 1);
gl.glTranslatef(0, ICONS_SIZE_RATIO, 0);
// Actually draws the polygon with text
c.draw(gl, textures[i]);
i++;
}
}
}
Where loadGLTexture() is:
protected int[] loadGLTexture(GL10 gl, List<PointOfInterest> l){
int res[] = new int[l.size()];
gl.glGenTextures(res.length, res, 0);
int i = 0;
for(PointOfInterest p : l) {
Bitmap bitmap = Bitmap.createBitmap(256, 256, Bitmap.Config.RGB_565);
bitmap.eraseColor(Color.BLACK);
Canvas canvas = new Canvas(bitmap);
Paint textPaint = new Paint();
textPaint.setTextSize(35);
textPaint.setFakeBoldText(true);
textPaint.setAntiAlias(true);
textPaint.setARGB(255, 255, 255, 255);
// Draw the text centered
canvas.drawText(Float.toString(p.getDinstanceToOrigin()) + " m.", 10,35, textPaint);
// Bind the texture to our array
gl.glBindTexture(GL10.GL_TEXTURE_2D, res[i]);
// Create Nearest Filtered Texture
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
i++;
}
return res;
}
It basically creates a bitmap for each Point of Interest and generates a texture with it. The texture will be applied over a white square later, as shown in this class:
public class PointOfInterest {
// MEMBERS ----------------------------------------------------------------
....
....
// OpenGL necessary variables
/** The buffer holding the vertices */
private FloatBuffer vertexBuffer;
/** The initial vertex definition */
private float vertices[] = {
-1.0f, 0.0f, -1.0f, //Bottom Left V1
-1.0f, 0.0f, 1.0f, //Top Left V2
1.0f, 0.0f, -1.0f, //Bottom Right V3
1.0f, 0.0f, 1.0f, //Top Right V4
};
private FloatBuffer textureBuffer;
private float texture[] = {
0.0f, 0.0f, // V1
1.0f, 0.0f, // V3
0.0f, 1.0f, // V2
1.0f, 1.0f // V4
};
// CONSTRUCTORS -----------------------------------------------------------
public PointOfInterest(Location originLocation){
currentLocation = originLocation;
mPoiLocation = new Location(LocationManager.GPS_PROVIDER);
ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4);
byteBuf.order(ByteOrder.nativeOrder());
vertexBuffer = byteBuf.asFloatBuffer();
vertexBuffer.put(vertices);
vertexBuffer.position(0);
byteBuf = ByteBuffer.allocateDirect(texture.length * 4);
byteBuf.order(ByteOrder.nativeOrder());
textureBuffer = byteBuf.asFloatBuffer();
textureBuffer.put(texture);
textureBuffer.position(0);
}
// PUBLIC METHODS ---------------------------------------------------------
public void draw(GL10 gl, int textureId){
// Bind the previously generated texture
gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
// Point to our buffers
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// set the colour for the square
//gl.glColor4f(0.0f, 0.1f, 0.0f, 0.5f);
//Set the face rotation
gl.glFrontFace(GL10.GL_CW);
//Point to our vertex buffer
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
//Draw the vertices as triangle strip
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
//Disable the client state before leaving
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
....
....
}
I have tried to map the texture as it is taught here and here with no success. I really don't know what to do to get some letters drawn on the squares and am really lost here... Maybe the text is being drawn on the other face of the square, maybe the textures are not being generated... I don't know.
Any kind of help would be greatly appreciated.
Okay, I had forgotten to enable texture mapping. You can do that within any method that uses the GL10 object. I prefer to do it within the draw method of my objects, so no other object is affected by the texture. It's as simple as this (I just changed two lines, the ones marked NEW!!):
public void draw(GL10 gl, int textureId){
gl.glEnable(GL10.GL_TEXTURE_2D); //NEW !!! Enable Texture Mapping
// Bind the previously generated texture
gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
// Point to our buffers
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// set the colour for the square
//gl.glColor4f(0.0f, 0.1f, 0.0f, 0.5f);
//Set the face rotation
gl.glFrontFace(GL10.GL_CW);
//Point to our vertex buffer
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
//Draw the vertices as triangle strip
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
//Disable the client state before leaving
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glDisable(GL10.GL_TEXTURE_2D); //NEW !!! Disable texture mapping
}