I have this code:
public Scene onLoadScene() {
    Random randomGenerator = new Random();
    pX = randomGenerator.nextInt(CAMERA_WIDTH);
    Sprite snow = new Sprite(pX, 1, 30, 30, mTextureSnowRegion);
    scene.getLastChild().attachChild(snow);
    return scene;
}
I am trying to make snowfall. I tried using a MoveModifier, but nothing works.
Please help.
I would suggest using a particle system in AndEngine: http://code.google.com/p/andengineexamples/source/browse/src/org/anddev/andengine/examples/ParticleSystemSimpleExample.java
public Scene onLoadScene() {
    Random randomGenerator = new Random();
    pX = randomGenerator.nextInt(CAMERA_WIDTH);
    Sprite snow = new Sprite(pX, 1, 30, 30, mTextureSnowRegion);
    scene.getLastChild().attachChild(snow);
    return scene;
}
This looks good. You just need to use a MoveYModifier instead of MoveModifier.
You should also use a GenericPool in AndEngine, because continually creating new Sprite instances and attaching them uses a lot of memory. Also, don't forget to detach a sprite once it has gone off screen.
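A pool hands back recycled objects instead of allocating new ones every frame. Below is a minimal, framework-free sketch of the pattern (class and method names here are illustrative; AndEngine's actual GenericPool has you override an allocation callback rather than pass a factory):

```java
import java.util.ArrayDeque;

// Minimal object pool in the spirit of AndEngine's GenericPool:
// reuse snowflake objects instead of allocating one per flake.
class SnowflakePool<T> {
    interface Factory<T> { T create(); }

    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final Factory<T> factory;
    private int allocations = 0;

    SnowflakePool(Factory<T> factory) { this.factory = factory; }

    T obtain() {
        T item = free.poll();         // reuse a recycled item if available
        if (item == null) {
            allocations++;
            item = factory.create();  // otherwise allocate a fresh one
        }
        return item;
    }

    void recycle(T item) { free.push(item); }  // return an off-screen flake

    int allocationCount() { return allocations; }
}
```

In the game you would obtain() a sprite when a flake spawns and recycle() it (after detaching) once it leaves the screen, so steady-state snowfall allocates nothing.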
Check out this
I'm using this particle code in my game to create snow. My game uses an 800x480 camera.
final RectangleParticleEmitter particleEmitter = new RectangleParticleEmitter(184.0f,44.0f,340,60);
final ParticleSystem particleSystem = new ParticleSystem(particleEmitter, 100, 200, 360, this.mParticleTextureRegion);
particleSystem.addParticleInitializer(new ColorInitializer(1, 1, 1));
particleSystem.addParticleInitializer(new AlphaInitializer(0));
particleSystem.setBlendFunction(GL10.GL_SRC_ALPHA, GL10.GL_ONE);
particleSystem.addParticleInitializer(new VelocityInitializer(-200, 200, -200, 200));
particleSystem.addParticleInitializer(new RotationInitializer(0.0f, 360.0f));
particleSystem.addParticleModifier(new ScaleModifier(1.0f, 1.2f, 0, 5));
particleSystem.addParticleModifier(new ColorModifier(1, 0.98f, 1, 0.96f, 1, 0.82f, 0, 3));
particleSystem.addParticleModifier(new ColorModifier(1, 1, 0.5f, 1, 0, 1, 4, 6));
particleSystem.addParticleModifier(new org.anddev.andengine.entity.particle.modifier.AlphaModifier(0, 1, 0, 1));
particleSystem.addParticleModifier(new org.anddev.andengine.entity.particle.modifier.AlphaModifier(1, 0, 5, 6));
particleSystem.addParticleModifier(new ExpireModifier(3, 6));
I am using similar particle system settings to @UncleIstvan's.
final BatchedPseudoSpriteParticleSystem particleSystem = new BatchedPseudoSpriteParticleSystem(
new RectangleParticleEmitter(CAMERA_WIDTH / 2, CAMERA_HEIGHT, CAMERA_WIDTH, 1),
2, 5, 100, mSnowParticleRegion,
this.getVertexBufferObjectManager()
);
particleSystem.setBlendFunction(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE);
particleSystem.addParticleInitializer(new VelocityParticleInitializer<Entity>(-3, 3, -20, -40));
particleSystem.addParticleInitializer(new AccelerationParticleInitializer<Entity>(-3, 3, -3, -5));
particleSystem.addParticleInitializer(new RotationParticleInitializer<Entity>(0.0f, 360.0f));
particleSystem.addParticleInitializer(new ExpireParticleInitializer<Entity>(10f));
particleSystem.addParticleInitializer(new ScaleParticleInitializer<Entity>(0.2f, 0.5f));
particleSystem.addParticleModifier(new AlphaParticleModifier<Entity>(6f, 10f, 1.0f, 0.0f));
scene.attachChild(particleSystem);
But I added an entity modifier to each particle:
particleSystem.addParticleInitializer(new RegisterXSwingEntityModifierInitializer<Entity>(10f, 0f, (float) Math.PI * 8, 3f, 25f, true));
It needs a custom particle initializer. In the initializer I register a new modifier to each particle:
@Override
public void onInitializeParticle(Particle<T> pParticle) {
    pParticle.getEntity().registerEntityModifier(
            new PositionXSwingModifier(mDuration,
                    mFromValue, mToValue,
                    mFromMagnitude, mToMagnitude));
}
And the last part is the modifier that uses a growing sine wave to create the swinging motion (some parts omitted):
public class PositionXSwingModifier extends SingleValueSpanEntityModifier {

    public PositionXSwingModifier(float pDuration, float pFromValue, float pToValue,
            float pFromMagnitude, float pToMagnitude) {
        // fromValue is usually 0
        // toValue is how far the sine wave's phase advances over the duration;
        // every 2*pi is one full sine wave
        super(pDuration, pFromValue, pToValue);
        mFromMagnitude = pFromMagnitude;
        mToMagnitude = pToMagnitude;
    }

    @Override
    protected void onSetValue(IEntity pItem, float pPercentageDone, float pValue) {
        // current magnitude based on percentage
        float currentMagnitude = mFromMagnitude + (mToMagnitude - mFromMagnitude) * pPercentageDone;
        // current sine wave value
        float currentSinValue = (float) Math.sin(pValue);
        // change the x position of the flake
        pItem.setX(mInitialX + currentMagnitude * currentSinValue);
    }
}
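The swing math itself can be sanity-checked outside AndEngine. Here is a small standalone sketch of the same interpolated-magnitude sine computation (names are illustrative, not part of the modifier API):

```java
// Standalone version of the swing math from PositionXSwingModifier:
// the x offset is a sine wave whose magnitude grows as the particle falls.
class SwingMath {
    static float swingOffset(float fromMagnitude, float toMagnitude,
                             float percentageDone, float phase) {
        // interpolate the magnitude over the particle's lifetime
        float magnitude = fromMagnitude + (toMagnitude - fromMagnitude) * percentageDone;
        // sample the sine wave at the current phase
        return magnitude * (float) Math.sin(phase);
    }
}
```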
It's based partly on my question here: https://gamedev.stackexchange.com/questions/56475/how-to-simulate-feather-fall-in-box2d
And you can get the full code and the APK to try it out here.
I'm trying to build an Augmented Reality application in Android using BoofCV (OpenCV alternative for Java) and OpenGL ES 2.0. I have a marker which I can get the image points of and "world to cam" transformation using BoofCV's solvePnP function. I want to be able to draw the marker in 3D using OpenGL. Here's what I have so far:
On every frame of the camera, I call solvePnP
Se3_F64 worldToCam = MathUtils.worldToCam(__qrWorldPoints, imagePoints);
mGLAssetSurfaceView.setWorldToCam(worldToCam);
This is what I have defined as the world points
static float qrSideLength = 79.365f; // mm
private static final double[][] __qrWorldPoints = {
{qrSideLength * -0.5, qrSideLength * 0.5, 0},
{qrSideLength * -0.5, qrSideLength * -0.5, 0},
{qrSideLength * 0.5, qrSideLength * -0.5, 0},
{qrSideLength * 0.5, qrSideLength * 0.5, 0}
};
I'm feeding it a square whose origin is at its center, with a side length in millimeters.
I can confirm that the rotation vector and translation vector I'm getting back from solvePnP are reasonable, so I don't know if there's a problem here.
I pass the result from solvePnP into my renderer
public void setWorldToCam(Se3_F64 worldToCam) {
DenseMatrix64F _R = worldToCam.R;
Vector3D_F64 _T = worldToCam.T;
// Concatenate the rotation matrix and translation vector into
// a view matrix
double[][] __view = {
{_R.get(0, 0), _R.get(0, 1), _R.get(0, 2), _T.getX()},
{_R.get(1, 0), _R.get(1, 1), _R.get(1, 2), _T.getY()},
{_R.get(2, 0), _R.get(2, 1), _R.get(2, 2), _T.getZ()},
{0, 0, 0, 1}
};
DenseMatrix64F _view = new DenseMatrix64F(__view);
// Matrix to convert from BoofCV (OpenCV) coordinate system to OpenGL coordinate system
double[][] __cv_to_gl = {
{1, 0, 0, 0},
{0, -1, 0, 0},
{0, -1, 0, 0},
{0, 0, 0, 1}
};
DenseMatrix64F _cv_to_gl = new DenseMatrix64F(__cv_to_gl);
// Multiply the View Matrix by the BoofCV to OpenGL matrix to apply the coordinate transform
DenseMatrix64F view = new SimpleMatrix(__view).mult(new SimpleMatrix(__cv_to_gl)).getMatrix();
// BoofCV stores matrices in row major order, but OpenGL likes column major order
// I transpose the view matrix and get a flattened list of 16,
// Then I convert them to floating point
double[] viewd = new SimpleMatrix(view).transpose().getMatrix().getData();
for (int i = 0; i < mViewMatrix.length; i++) {
mViewMatrix[i] = (float) viewd[i];
}
}
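The transpose-and-flatten step is easy to get wrong. A framework-free sketch of the same row-major to column-major conversion, without EJML (names are illustrative):

```java
// Flatten a row-major 4x4 matrix into the column-major float[16]
// layout OpenGL expects (same result as transpose-then-read-row-by-row).
class MatrixFlatten {
    static float[] toColumnMajor(double[][] m) {
        float[] out = new float[16];
        for (int col = 0; col < 4; col++) {
            for (int row = 0; row < 4; row++) {
                out[col * 4 + row] = (float) m[row][col];
            }
        }
        return out;
    }
}
```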
I'm also using the camera intrinsics I get from camera calibration to feed into the projection matrix of OpenGL
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
// this projection matrix is applied to object coordinates
// in the onDrawFrame() method
double fx = MathUtils.fx;
double fy = MathUtils.fy;
float fovy = (float) (2 * Math.atan(0.5 * height / fy) * 180 / Math.PI);
float aspect = (float) ((width * fy) / (height * fx));
// be careful with this, it could explain why you don't see certain objects
float near = 0.1f;
float far = 100.0f;
Matrix.perspectiveM(mProjectionMatrix, 0, fovy, aspect, near, far);
GLES20.glViewport(0, 0, width, height);
}
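The intrinsics-to-FOV conversion above can be checked numerically on its own. A small sketch of the same formulas (the sample values in use are hypothetical, not your calibration):

```java
// Derive the vertical field of view and aspect ratio for a GL
// projection from pinhole camera intrinsics (fx, fy in pixels).
class IntrinsicsToFov {
    static double fovyDegrees(double fy, int heightPx) {
        // half the image height subtends atan(0.5 * h / fy)
        return 2.0 * Math.atan(0.5 * heightPx / fy) * 180.0 / Math.PI;
    }

    static double aspect(double fx, double fy, int widthPx, int heightPx) {
        // with square pixels (fx == fy) this reduces to width / height
        return (widthPx * fy) / (heightPx * fx);
    }
}
```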
The square I'm drawing is the one defined in this Google example.
@Override
public void onDrawFrame(GL10 gl) {
// redraw background color
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
// Set the camera position (View matrix)
// Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
// Combine the rotation matrix with the projection and camera view
// Note that the mMVPMatrix factor *must be the first* in order
// for matrix multiplication product to be correct
// Calculate the projection and view transformation
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
// Draw shape
mSquare.draw(mMVPMatrix);
}
I believe the problem has to do with the fact that this definition of a square in Google's example code doesn't take the real-world side length into account. I understand that the OpenGL coordinate system has the corners (-1, 1), (-1, -1), (1, -1), (1, 1), which doesn't correspond to the millimeter object points I have defined for use in BoofCV, even though they are in the right order.
static float squareCoords[] = {
-0.5f, 0.5f, 0.0f, // top left
-0.5f, -0.5f, 0.0f, // bottom left
0.5f, -0.5f, 0.0f, // bottom right
0.5f, 0.5f, 0.0f }; // top right
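If the mismatch really is the unit square versus the millimeter world points, one possible fix is to scale the example's coordinates by the marker's side length so both live in the same units. A hedged sketch, not a confirmed fix for your scene:

```java
// Scale the unit-square vertex coordinates so the drawn quad spans the
// marker's real side length (in millimeters), matching __qrWorldPoints.
class SquareScale {
    static float[] scaled(float[] coords, float sideLength) {
        float[] out = new float[coords.length];
        for (int i = 0; i < coords.length; i++) {
            out[i] = coords[i] * sideLength;
        }
        return out;
    }
}
```

For example, scaled(squareCoords, qrSideLength) turns the ±0.5 square into a ±39.68 mm square.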
I'm trying to learn OpenGL ES 2.0 and I want to load 3D models on Android. I can now load a model with its texture properly, but I have a problem with depth. When I place my model in perspective and part of it should be hidden behind another part, a triangle or two are drawn in front when they should be behind, and I can see through some parts of the model.
I tried setEGLConfigChooser(8, 8, 8, 8, 16, 0) and (8, 8, 8, 8, 24, 0), but the problem remains the same, except that with (8, 8, 8, 8, 24, 0) the display is a little better defined; however, when the 3D object moves, the colors flicker with a strobe effect that is disturbing.
I also tried glDepthFunc(GL_LEQUAL) together with glEnable(GL_DEPTH_TEST), but this does not resolve my problem.
Here are pictures of the problem:
The problem: (link is broken)
The correct rendering: (link is broken)
Sorry about the picture links; I don't have enough reputation to post pictures in the question.
Here is my code.
My GLSurfaceView
public MyGLSurfaceView(Context context) {
    super(context);
    this.context = context;
    setEGLContextClientVersion(2);
    setEGLConfigChooser(true);
    //setZOrderOnTop(true);
    //setEGLConfigChooser(8, 8, 8, 8, 16, 0);
    //setEGLConfigChooser(8, 8, 8, 8, 24, 0);
    //getHolder().setFormat(PixelFormat.RGBA_8888);
    mRenderer = new Renderer(context);
    setRenderer(mRenderer);
}
My renderer
@Override
public void onSurfaceCreated(GL10 glUnused, EGLConfig config) {
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);
    glFrontFace(GL_CCW);
    glEnable(GL_DEPTH_TEST);
    mushroom = new Mushroom();
    textureProgram = new TextureShaderProgram(context);
    texture = TextureHelper.loadTexture(context, R.drawable.mushroom);
}
@Override
public void onSurfaceChanged(GL10 glUnused, int width, int height) {
    glViewport(0, 0, width, height);
    MatrixHelper.perspectiveM(projectionMatrix, 45, (float) width
            / (float) height, 0f, 10f);
    setLookAtM(viewMatrix, 0, 0f, 1.2f, -10.2f, 0f, 0f, 0f, 0f, 1f, 0f);
}
@Override
public void onDrawFrame(GL10 glUnused) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    multiplyMM(viewProjectionMatrix, 0, projectionMatrix, 0, viewMatrix, 0);
    glDepthFunc(GL_LEQUAL);
    //glDepthMask(true);
    positionMushroomInScene();
    textureProgram.useProgram();
    textureProgram.setUniforms(modelViewProjectionMatrix, texture);
    mushroom.bindData(textureProgram);
    mushroom.draw();
    //glDepthFunc(GL_LESS);
}
private void positionMushroomInScene() {
    setIdentityM(modelMatrix, 0);
    translateM(modelMatrix, 0, 0f, 0f, 5f);
    rotateM(modelMatrix, 0, -yRotation, 1f, 0f, 0f);
    rotateM(modelMatrix, 0, xRotation, 0f, 1f, 0f);
    multiplyMM(modelViewProjectionMatrix, 0, viewProjectionMatrix,
            0, modelMatrix, 0);
}
My matrix Helper
public static void perspectiveM(float[] m, float yFovInDegrees, float aspect, float n, float f) {
    final float angleInRadians = (float) (yFovInDegrees * Math.PI / 180.0);
    final float a = (float) (1.0 / Math.tan(angleInRadians / 2.0));
    m[0] = a / aspect;
    m[1] = 0f;
    m[2] = 0f;
    m[3] = 0f;
    m[4] = 0f;
    m[5] = a;
    m[6] = 0f;
    m[7] = 0f;
    m[8] = 0f;
    m[9] = 0f;
    m[10] = -((f + n) / (f - n));
    m[11] = -1f;
    m[12] = 0f;
    m[13] = 0f;
    m[14] = -((2f * f * n) / (f - n));
    m[15] = 0f;
}
The problem is most likely with the way you set up your projection matrix:
MatrixHelper.perspectiveM(projectionMatrix, 45, (float) width
/ (float) height, 0f, 10f);
The 4th argument in your definition of this function is the near plane. This value should never be 0.0. It should typically be a reasonable fraction of the far distance. Choosing the ideal value can be somewhat of a tradeoff. The larger far / near is, the less depth precision you get. On the other hand, if you set the near value too large, you risk clipping off close geometry that you actually wanted to see.
A ratio of maybe 100 or 1000 for far / near should normally give you reasonable depth precision, without undesirable front clipping. You'll need to be a little more conservative with the ratio if you use a 16-bit depth buffer than if you have a 24-bit depth buffer.
For your purpose, try changing near to 0.1, and see how that works for you:
MatrixHelper.perspectiveM(projectionMatrix, 45, (float) width
/ (float) height, 0.1f, 10f);
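You can see numerically why near = 0 is fatal by evaluating the depth-related terms of the perspectiveM matrix above: with n = 0, m[14] collapses to 0 and every vertex maps to the same normalized depth, so the depth test can no longer order fragments. A small sketch:

```java
// Depth-related terms of the perspectiveM matrix from the question:
// m10 = -(f + n) / (f - n), m14 = -(2 f n) / (f - n).
// For an eye-space point at distance d in front of the camera,
// normalized device depth = clip.z / clip.w = (m10 * -d + m14) / d.
class DepthTerms {
    static double ndcDepth(double n, double f, double d) {
        double m10 = -((f + n) / (f - n));
        double m14 = -((2.0 * f * n) / (f - n));
        return (m10 * -d + m14) / d;
    }
}
```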
I've been trying to illuminate plane meshes generated by the following:
private Model createPlane(float w, float h, Texture texture) {
Mesh mesh = new Mesh(true, 4, 6, new VertexAttribute(Usage.Position, 3, "a_position"),
new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord0"),
new VertexAttribute(Usage.Normal, 3, "a_normal"));
float w2 = w * this.CELL_SIZE;
float h2 = h * this.CELL_SIZE;
mesh.setVertices(new float[]
{ w2, 0f, h2, 0, 0, 0, 1, 0,
w2, 0f, -h2, 0, h, 0, 1, 0,
-w2, 0f, h2, w, 0, 0, 1, 0,
-w2, 0f, -h2 , w,h, 0, 1, 0
});
mesh.setIndices(new short[] { 0, 1, 2, 1, 3, 2});
Model model = ModelBuilder.createFromMesh(mesh, GL10.GL_TRIANGLES, new Material(TextureAttribute.createDiffuse(texture)));
return model;
}
and are rendered using:
//the environment setup
env = new Environment();
env.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.4f, 0.4f, 0.4f, 1f));
env.add(new PointLight().set(Color.ORANGE, 5f, 1f, 5f, 10f));
env.add(new DirectionalLight().set(Color.WHITE, -1f, -0.8f, -0.2f));
...
//the render method
batch.begin();
batch.render(inst, env);//inst is a ModelInstance created using the Model generated from createPlane(...)
batch.end();
The meshes display correctly (UVs, textured) and seem to be properly affected by directional and ambient lighting.
When I try to add a point light (see above environment) none of the planes generated from createPlane(...) are affected. I've tried creating another bit of geometry using the ModelBuilder class's createBox(...) and it seems to properly respond to the point light. Because of that I'm assuming that I'm not generating the plane correctly, but the fact that it's apparently being affected by directional/ambient light is throwing me off a bit.
It's worth noting that the size of the planes generated vary, I'm not particularly sure if a point light would affect 4 vertices very much but I expected more than nothing. Moving the point light around (closer to certain vertices) doesn't do anything either.
Any help would be greatly appreciated.
Thanks!
It would be great to know which shader you are using. The default one? I am not sure whether they have fixed it already, but there was a bug a while ago where point lighting only worked on some devices (it had something to do with the manufacturer's OpenGL ES implementation). I personally fixed this by using my own shader.
Edit: this seems to be fixed now.
I checked the code I am using. The problem was determining the correct name of the light uniform array in the shader.
Did it like this in the end:
// on some devices this was working
int u_lightPosition = program.getUniformLocation("u_lightPosition[0]");
int u_lightColors = program.getUniformLocation("u_lightColor[0]");
if (u_lightPosition < 0 && u_lightColors < 0) {
    // on others this was working
    u_lightPosition = program.getUniformLocation("u_lightPosition");
    u_lightColors = program.getUniformLocation("u_lightColor");
}
I hope this helps!
Consider a car object that should move forward along a road. I don't have a car object yet, but I'll add that shape later. Instead of a car I have a square for now; how can I move it forward at a specific speed?
Any ideas?
Here is my code:
public class GLqueue {
private float vertices[] = { 1, 1, -1, // topfront
1, -1, -1, // bofrontright
-1, -1, -1, // botfrontleft
-1, 1, -1,
1, 1, 1,
1, -1, 1,
-1, -1, 1,
-1, 1, 1,
};
private FloatBuffer vertBuff;
private short[] pIndex = { 3, 4, 0, 0, 4, 1, 3, 0, 1,
3, 7, 4, 7, 6, 4, 7, 3, 6,
3, 1, 2, 1, 6, 2, 6, 3, 2,
1, 4, 5, 5, 6, 1, 6, 5, 4,
};
private ShortBuffer pBuff;
public GLqueue() {
ByteBuffer bBuff = ByteBuffer.allocateDirect(vertices.length * 4);
bBuff.order(ByteOrder.nativeOrder());
vertBuff = bBuff.asFloatBuffer();
vertBuff.put(vertices);
vertBuff.position(0);
ByteBuffer pbBuff = ByteBuffer.allocateDirect(pIndex.length * 4);
pbBuff.order(ByteOrder.nativeOrder());
pBuff = pbBuff.asShortBuffer();
pBuff.put(pIndex);
pBuff.position(0);
}
public void draw(GL10 gl) {
gl.glFrontFace(GL10.GL_CW);
gl.glEnable(GL10.GL_CULL_FACE);
gl.glCullFace(GL10.GL_BACK);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertBuff);
gl.glDrawElements(GL10.GL_TRIANGLES, pIndex.length,
GL10.GL_UNSIGNED_SHORT, pBuff);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisable(GL10.GL_CULL_FACE);
}
}
Velocity relates two factors, distance and time: v = d/t.
'Moving' an object is usually done by changing its position relative to the starting position. The distance to move is calculated by rearranging the above formula: d = v*t.
This means that in order to know where to position our object when drawing we must know the velocity and the time.
The velocity is probably decided by the user or the program somehow (I.E the user pushes the button for driving faster and the velocity goes up).
The current time can be retrieved by calling System.currentTimeMillis().
Here is a very simple example of an implementation:
//Variables:
long time1, time2, dt;
float velocity;
float direction;
//In game loop:
time1 = System.currentTimeMillis();
dt = time1 - time2;
float ds = velocity * dt; //ds is the difference in position since last frame.
car.updatePosition(ds, direction); //Some method that translates the car by ds in direction.
time2 = time1;
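Putting that loop together, here is a minimal runnable sketch. Note that dt from currentTimeMillis() is in milliseconds, so either express velocity in units per millisecond or divide by 1000 to get seconds; the Car class here is illustrative, not from the question's code:

```java
// Frame-by-frame position update: the distance covered this frame is
// velocity * dt (d = v * t), with dt measured in seconds between frames.
class Car {
    double x;              // position along the road
    final double velocity; // units per second

    Car(double velocity) { this.velocity = velocity; }

    void update(double dtSeconds) {
        x += velocity * dtSeconds;
    }
}
```

In the render loop you would compute dtSeconds as (System.currentTimeMillis() - lastMillis) / 1000.0 and call update(dtSeconds) before drawing.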
I'll start by saying that I'm REALLY new to OpenGL ES (I started yesterday =), but I do have some experience with Java and other languages.
I've looked at a lot of tutorials, of course NeHe's ones, and my work is mainly based on those.
As a test, I started creating a "tile generator" in order to create a small Zelda-like game (just moving a dude around a textured square would be awesome :p).
So far, I have achieved a working tile generator. I define a char map[][] array to store which tile goes where:
private char[][] map = {
{0, 0, 20, 11, 11, 11, 11, 4, 0, 0},
{0, 20, 16, 12, 12, 12, 12, 7, 4, 0},
{20, 16, 17, 13, 13, 13, 13, 9, 7, 4},
{21, 24, 18, 14, 14, 14, 14, 8, 5, 1},
{21, 22, 25, 15, 15, 15, 15, 6, 2, 1},
{21, 22, 23, 0, 0, 0, 0, 3, 2, 1},
{21, 22, 23, 0, 0, 0, 0, 3, 2, 1},
{26, 0, 0, 0, 0, 0, 0, 3, 2, 1},
{0, 0, 0, 0, 0, 0, 0, 0, 0, 1},
{0, 0, 0, 0, 0, 0, 0, 0, 0, 1}
};
It's working, but I'm not happy with it; I'm sure there is a better way to do these things:
1) Loading Textures :
I create an ugly-looking array containing the tiles I want to use on that map:
private int[] textures = {
R.drawable.herbe, //0
R.drawable.murdroite_haut, //1
R.drawable.murdroite_milieu, //2
R.drawable.murdroite_bas, //3
R.drawable.angledroitehaut_haut, //4
R.drawable.angledroitehaut_milieu, //5
};
(I cut this short on purpose; I currently load 27 tiles)
All of theses are stored in the drawable folder, each one is a 16*16 tile.
I then use this array to generate the textures and store them in a HashMap for later use:
int[] tmp_tex = new int[textures.length];
gl.glGenTextures(textures.length, tmp_tex, 0);
texturesgen = tmp_tex; //Store the generated names in texturesgen
for(int i=0; i < textures.length; i++)
{
//Bitmap bmp = BitmapFactory.decodeResource(context.getResources(), textures[i]);
InputStream is = context.getResources().openRawResource(textures[i]);
Bitmap bitmap = null;
try {
//BitmapFactory is an Android graphics utility for images
bitmap = BitmapFactory.decodeStream(is);
} finally {
//Always clear and close
try {
is.close();
is = null;
} catch (IOException e) {
}
}
// Get a new texture name
// Load it up
this.textureMap.put(new Integer(textures[i]),new Integer(i));
int tex = tmp_tex[i];
gl.glBindTexture(GL10.GL_TEXTURE_2D, tex);
//Create Nearest Filtered Texture
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
//Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
//Use the Android GLUtils to specify a two-dimensional texture image from our bitmap
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
}
I'm quite sure there is a better way to handle that... I just wasn't able to figure it out. If someone has an idea, I'm all ears.
2) Drawing the tiles
What I did was create a single square and a single texture map:
/** The initial vertex definition */
private float vertices[] = {
-1.0f, -1.0f, 0.0f, //Bottom Left
1.0f, -1.0f, 0.0f, //Bottom Right
-1.0f, 1.0f, 0.0f, //Top Left
1.0f, 1.0f, 0.0f //Top Right
};
private float texture[] = {
//Mapping coordinates for the vertices
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f
};
Then, in my draw function, I loop through the map to define the texture to use (after pointing to and enabling the buffers):
for(int y = 0; y < Y; y++){
for(int x = 0; x < X; x++){
tile = map[y][x];
try
{
//Get the texture from the HashMap
int textureid = ((Integer) this.textureMap.get(new Integer(textures[tile]))).intValue();
gl.glBindTexture(GL10.GL_TEXTURE_2D, this.texturesgen[textureid]);
}
catch(Exception e)
{
return;
}
//Draw the vertices as triangle strip
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
gl.glTranslatef(2.0f, 0.0f, 0.0f); //A square takes 2x so I move +2x before drawing the next tile
}
gl.glTranslatef(-(float)(2*X), -2.0f, 0.0f); //Go back to the begining of the map X-wise and move 2y down before drawing the next line
}
This works great, but I really think that on a 1000x1000 or larger map it will lag like hell (as a reminder, this is a typical Zelda world map: http://vgmaps.com/Atlas/SuperNES/LegendOfZelda-ALinkToThePast-LightWorld.png ).
I've read about Vertex Buffer Objects and display lists, but I couldn't find a good tutorial, and nobody seems to agree on which one is best / better supported (T1 and Nexus One are ages away).
I think that's it. I've posted a lot of code, but I think it helps.
Thanks in advance!
A couple of things:
There's no need to use a hashmap, just use a vector/list.
It may be faster/easier to have one large texture that contains all your tiles. Use appropriate texture coordinates to select the appropriate tile. You might have to be a little bit careful about texture filtering here. It sounds like you are doing a 2D game in which case you probably want to use nearest-neighbour filtering for the tiles and clamp the camera to integer pixel locations.
Wouldn't it be easier to use GL_QUADS rather than GL_TRIANGLE_STRIP? (Although note that GL_QUADS doesn't exist in OpenGL ES, so GL_TRIANGLE_STRIP is the right choice there.) Not sure about your code there: you don't seem to use the 'texture' array.
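For the single-large-texture suggestion, the per-tile texture coordinates can be computed from the tile index. A small sketch assuming a square atlas of equally sized tiles laid out row by row (a hypothetical layout, not your assets):

```java
// UV rectangle for tile 'index' in a square texture atlas laid out as
// tilesPerRow x tilesPerRow equally sized tiles, row by row.
class AtlasUv {
    static float[] uvRect(int index, int tilesPerRow) {
        float size = 1.0f / tilesPerRow;
        float u = (index % tilesPerRow) * size; // column within the row
        float v = (index / tilesPerRow) * size; // row within the atlas
        return new float[] { u, v, u + size, v + size }; // u0, v0, u1, v1
    }
}
```

Each tile then selects its UV rectangle instead of binding a separate texture, so the whole map can be drawn with a single glBindTexture.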
The map size shouldn't make any difference, as long as you don't draw tiles that aren't on the screen. Your code should be something like:
int minX = screenLeft / tileSize;
int minY = screenBottom / tileSize;
int maxX = screenRight / tileSize;
int maxY = screenTop / tileSize;
for (int x = minX; x <= maxX; ++x)
{
    for (int y = minY; y <= maxY; ++y)
    {
        ...