Do OpenGL Point Sprites work in Android?

I'm developing on a Droid, version 2.1-update1. My supported GL extensions include GL_OES_point_sprite and GL_OES_point_size_array.
I am unable to get point sprites to render. The code below throws UnsupportedOperationException from GLWrapperBase at the glTexEnvi call. If I disable textures and comment out the glTexEnvi call, it throws the same exception further down, at glPointSizePointerOES().
Are point sprites properly supported in Android? Has anyone gotten them working? Or is there an issue with my code below?
// Note that gl is cast to GL11
gl.glEnable(GL11.GL_TEXTURE_2D);
gl.glEnable(GL11.GL_BLEND);
gl.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
gl.glDepthMask(false);
gl.glEnable(GL11.GL_POINT_SPRITE_OES);
gl.glTexEnvi( GL11.GL_POINT_SPRITE_OES, GL11.GL_COORD_REPLACE_OES, GL11.GL_TRUE );
gl.glEnableClientState(GL11.GL_VERTEX_ARRAY);
gl.glVertexPointer(2, GL11.GL_SHORT, 0, vertBuffer);
gl.glEnableClientState(GL11.GL_POINT_SIZE_ARRAY_OES);
gl.glPointSizePointerOES(GL11.GL_FLOAT, 0, pointSizeBuffer);
Thanks

I got this working; here is my draw function.
Initialize everything
gl.glEnable(GL10.GL_TEXTURE_2D);
TextureManager.activateTexture(gl, R.drawable.water1); // Don't look for this, it's not public API; it just looks up the texture id for an Android resource if loaded and then activates it. It replaces the gl.glBindTexture() call.
gl.glEnable(GL11.GL_POINT_SPRITE_OES);
gl.glEnableClientState(GL11.GL_POINT_SIZE_ARRAY_BUFFER_BINDING_OES);
gl.glEnableClientState(GL11.GL_POINT_SIZE_ARRAY_OES);
gl.glEnableClientState(GL11.GL_POINT_SPRITE_OES);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
Set the texture environment up to use point sprites
gl.glTexEnvf(GL11.GL_POINT_SPRITE_OES, GL11.GL_COORD_REPLACE_OES, GL11.GL_TRUE);
Set up pointers to the data (first array is 2D, laid out [x1,y1,x2,y2,...]; second is 1D, [s1,s2,...])
gl.glVertexPointer(2,GL11.GL_FLOAT,0,PosData);
((GL11)(gl)).glPointSizePointerOES(GL10.GL_FLOAT, 0, SizeData);
Draw
gl.glDrawArrays(GL10.GL_POINTS,0,MAX);
Disable stuff
gl.glDisableClientState(GL11.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL11.GL_POINT_SIZE_ARRAY_OES);
gl.glDisableClientState(GL11.GL_POINT_SIZE_ARRAY_BUFFER_BINDING_OES);
gl.glDisableClientState(GL11.GL_POINT_SIZE_ARRAY_OES);
gl.glDisable(GL10.GL_TEXTURE_2D);
In my initializer I only have my projection set up and GL_BLEND enabled for blending. I think you would need GL_COLOR_MATERIAL if you wanted to color your sprite.
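A minimal sketch of such an initializer (my illustration, not the exact code; it assumes a GLSurfaceView.Renderer, and the ortho projection is just an example, swap in whatever projection your scene needs):
public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    GLU.gluOrtho2D(gl, 0, width, 0, height); // simple 2D projection
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    // blending for the sprite texture's alpha
    gl.glEnable(GL10.GL_BLEND);
    gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
}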

I got the point sprites working with ES 1.1 & 2 on a Nexus One. I use a fixed point size, so I didn't have to use a size buffer, but you can use my code to first get it working and then add the size buffer.
In my draw method:
gl.glEnable(GL11.GL_POINT_SPRITE_OES);
gl.glTexEnvf(GL11.GL_POINT_SPRITE_OES, GL11.GL_COORD_REPLACE_OES, GL11.GL_TRUE);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
// 2 dimensional array, (x1,y1, x2, y2, ...).
gl.glVertexPointer(2, GL10.GL_FLOAT, 0, mVerticesBuffer);
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureId);
gl.glPointSize(32); // Fixed point size for all points
// This only worked with GLES11 & GLES20.
GLES11.glDrawArrays(GLES11.GL_POINTS, 0, vertices.length);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisable(GL10.GL_TEXTURE_2D);
gl.glDisable(GL11.GL_POINT_SPRITE_OES);
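To go from the fixed point size above to a per-point size array, a sketch like the following should work on devices exposing GL_OES_point_size_array (the buffer name, numPoints, and the size values here are made up for illustration):
// one float per point, in the same order as the vertices in mVerticesBuffer
float[] sizes = new float[numPoints];
for (int i = 0; i < numPoints; i++) {
    sizes[i] = 16.0f; // or any per-point value
}
FloatBuffer sizeBuffer = ByteBuffer.allocateDirect(sizes.length * 4)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer()
        .put(sizes);
sizeBuffer.position(0);

GL11 gl11 = (GL11) gl;
gl11.glEnableClientState(GL11.GL_POINT_SIZE_ARRAY_OES);
gl11.glPointSizePointerOES(GL10.GL_FLOAT, 0, sizeBuffer);
// ... glDrawArrays(GL_POINTS, ...) as above ...
gl11.glDisableClientState(GL11.GL_POINT_SIZE_ARRAY_OES);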

If, like me, you're using MatrixTrackingGL, you need to use glTexEnvf rather than glTexEnvi (f, not i, at the end), and you need to go into MatrixTrackingGL and change glPointSizePointerOES:
public void glPointSizePointerOES(int type, int stride, Buffer pointer) {
mgl11.glPointSizePointerOES(type, stride, pointer);
//throw new UnsupportedOperationException();
}
I'm sure there is a good reason why it is unsupported in the first place, but I don't know it, and this works for me on a ZTE Blade running Android 2.1.
For anyone wondering, MatrixTrackingGL comes from C:\Program Files\android-sdk-windows\samples\android-7\ApiDemos\src\com\example\android\apis\graphics\spritetext
It is used when setting up your GLSurfaceView:
// GraphicsRenderer is my implementation of Renderer
GraphicsRenderer graphicsRenderer = new GraphicsRenderer(this);
GLSurfaceView mGLView = (GLSurfaceView) findViewById(R.id.graphics_glsurfaceview1);
mGLView.setGLWrapper(new GLSurfaceView.GLWrapper() {
public GL wrap(GL gl) {
return new MatrixTrackingGL(gl);
}});
mGLView.setEGLConfigChooser(true);
mGLView.setRenderer(graphicsRenderer);
and it means you can use GLU.gluUnProject to do picking:
MatrixGrabber matrixGrabber = new MatrixGrabber();
matrixGrabber.getCurrentModelView(gl);
matrixGrabber.getCurrentProjection(gl);
float[] vector = new float[4];
GLU.gluUnProject(x, y, 0f, matrixGrabber.mModelView, 0, matrixGrabber.mProjection, 0, new int[]{mGLView.getTop(), mGLView.getLeft(), mGLView.getWidth(), mGLView.getHeight()}, 0, vector, 0);

Related

OpenGL drawing on Android combined with Unity to transfer a texture through a frame buffer does not work

I'm currently making an Android player plugin for Unity. The basic idea is that I will play the video with MediaPlayer on Android, which provides a setSurface API receiving a SurfaceTexture as a constructor parameter, and in the end it binds to an OpenGL-ES texture. In most other cases, like showing an image, we can just send this texture in the form of a pointer/id to Unity, call Texture2D.CreateExternalTexture there to generate a Texture2D object, and set it on a UI GameObject to render the picture. However, when it comes to displaying video frames, it's a little bit different, since video playback on Android requires a texture of type GL_TEXTURE_EXTERNAL_OES while Unity only supports the universal type GL_TEXTURE_2D.
To solve the problem, I googled for a while and learned that I should adopt a technique called "render to texture". To be more specific, I should generate two textures: one for the MediaPlayer and SurfaceTexture on Android to receive video frames, and another for Unity that should also hold the picture data. The first one should be of type GL_TEXTURE_EXTERNAL_OES (let's call it the OES texture for short) and the second of type GL_TEXTURE_2D (let's call it the 2D texture). Both generated textures are empty in the beginning. When bound to the MediaPlayer, the OES texture is updated during video playback, and we can then use a FrameBuffer to draw the content of the OES texture onto the 2D texture.
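Roughly, that transfer step looks like the sketch below. This is a simplified illustration, not the exact project code: buildProgram() is a hypothetical helper that compiles and links the two shaders, the FBO is assumed to already have the 2D texture attached (as in the code further down), and SurfaceTexture.updateTexImage() is assumed to have been called so the OES texture holds the latest frame.
private static final String BLIT_VS =
        "attribute vec2 aPos;\n" +
        "attribute vec2 aTex;\n" +
        "varying vec2 vTex;\n" +
        "void main() { vTex = aTex; gl_Position = vec4(aPos, 0.0, 1.0); }";
private static final String BLIT_FS =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 vTex;\n" +
        "uniform samplerExternalOES uOesTex;\n" +
        "void main() { gl_FragColor = texture2D(uOesTex, vTex); }";

public void blitOesToTexture2D(int oesTextureId, int fboId, int width, int height) {
    GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, fboId); // 2D texture is the color attachment
    GLES30.glViewport(0, 0, width, height);
    int program = buildProgram(BLIT_VS, BLIT_FS); // hypothetical compile/link helper
    GLES30.glUseProgram(program);
    GLES30.glActiveTexture(GLES30.GL_TEXTURE0);
    GLES30.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, oesTextureId);
    GLES30.glUniform1i(GLES30.glGetUniformLocation(program, "uOesTex"), 0);
    // full-screen quad
    float[] pos = { -1f, -1f,  1f, -1f,  -1f, 1f,  1f, 1f };
    float[] tex = {  0f,  0f,  1f,  0f,   0f, 1f,  1f, 1f };
    FloatBuffer posBuf = ByteBuffer.allocateDirect(pos.length * 4)
            .order(ByteOrder.nativeOrder()).asFloatBuffer().put(pos);
    posBuf.position(0);
    FloatBuffer texBuf = ByteBuffer.allocateDirect(tex.length * 4)
            .order(ByteOrder.nativeOrder()).asFloatBuffer().put(tex);
    texBuf.position(0);
    int aPos = GLES30.glGetAttribLocation(program, "aPos");
    int aTex = GLES30.glGetAttribLocation(program, "aTex");
    GLES30.glEnableVertexAttribArray(aPos);
    GLES30.glVertexAttribPointer(aPos, 2, GLES30.GL_FLOAT, false, 0, posBuf);
    GLES30.glEnableVertexAttribArray(aTex);
    GLES30.glVertexAttribPointer(aTex, 2, GLES30.GL_FLOAT, false, 0, texBuf);
    GLES30.glDrawArrays(GLES30.GL_TRIANGLE_STRIP, 0, 4);
    GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, 0);
}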
I've written a pure-Android version of this process and it works pretty well when I finally draw the 2D texture on the screen. However, when I publish it as a Unity Android plugin and run the same code from Unity, no picture shows up. Instead, it only displays a preset color from glClearColor, which means two things:
The transfer process of OES texture -> FrameBuffer -> 2D texture is completed and Unity does receive the final 2D texture, because glClearColor is called only when we draw the content of the OES texture to the FrameBuffer.
Something goes wrong in the drawing that happens after glClearColor, because we don't see the video frame pictures. In fact, I also call glReadPixels after drawing and before unbinding the FrameBuffer, which reads data back from the FrameBuffer we bound, and it returns the single color value that is the same as the color we set in glClearColor.
In order to simplify the code I provide here, I'm going to draw a triangle to a 2D texture through a FrameBuffer. If we can figure out which part is wrong, we can then easily solve the similar problem of drawing video frames.
The function will be called on Unity:
public int displayTriangle() {
Texture2D texture = new Texture2D(UnityPlayer.currentActivity);
texture.init();
Triangle triangle = new Triangle(UnityPlayer.currentActivity);
triangle.init();
TextureTransfer textureTransfer = new TextureTransfer();
textureTransfer.tryToCreateFBO();
mTextureWidth = 960;
mTextureHeight = 960;
textureTransfer.tryToInitTempTexture2D(texture.getTextureID(), mTextureWidth, mTextureHeight);
textureTransfer.fboStart();
triangle.draw();
textureTransfer.fboEnd();
// Unity needs a native texture id to create its own Texture2D object
return texture.getTextureID();
}
Initialization of 2D texture:
protected void initTexture() {
int[] idContainer = new int[1];
GLES30.glGenTextures(1, idContainer, 0);
textureId = idContainer[0];
Log.i(TAG, "texture2D generated: " + textureId);
// texture.getTextureID() will return this textureId
bindTexture();
GLES30.glTexParameterf(GLES30.GL_TEXTURE_2D,
GLES30.GL_TEXTURE_MIN_FILTER, GLES30.GL_NEAREST);
GLES30.glTexParameterf(GLES30.GL_TEXTURE_2D,
GLES30.GL_TEXTURE_MAG_FILTER, GLES30.GL_LINEAR);
GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D,
GLES30.GL_TEXTURE_WRAP_S, GLES30.GL_CLAMP_TO_EDGE);
GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D,
GLES30.GL_TEXTURE_WRAP_T, GLES30.GL_CLAMP_TO_EDGE);
unbindTexture();
}
public void bindTexture() {
GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, textureId);
}
public void unbindTexture() {
GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, 0);
}
draw() of Triangle:
public void draw() {
float[] vertexData = new float[] {
0.0f, 0.0f, 0.0f,
1.0f, -1.0f, 0.0f,
1.0f, 1.0f, 0.0f
};
vertexBuffer = ByteBuffer.allocateDirect(vertexData.length * 4)
.order(ByteOrder.nativeOrder())
.asFloatBuffer()
.put(vertexData);
vertexBuffer.position(0);
GLES30.glClearColor(0.0f, 0.0f, 0.9f, 1.0f);
GLES30.glClear(GLES30.GL_DEPTH_BUFFER_BIT | GLES30.GL_COLOR_BUFFER_BIT);
GLES30.glUseProgram(mProgramId);
vertexBuffer.position(0);
GLES30.glEnableVertexAttribArray(aPosHandle);
GLES30.glVertexAttribPointer(
aPosHandle, 3, GLES30.GL_FLOAT, false, 12, vertexBuffer);
GLES30.glDrawArrays(GLES30.GL_TRIANGLE_STRIP, 0, 3);
}
vertex shader of Triangle:
attribute vec4 aPosition;
void main() {
gl_Position = aPosition;
}
fragment shader of Triangle:
precision mediump float;
void main() {
gl_FragColor = vec4(0.9, 0.0, 0.0, 1.0);
}
Key code of TextureTransfer:
public void tryToInitTempTexture2D(int texture2DId, int textureWidth, int textureHeight) {
if (mTexture2DId != -1) {
return;
}
mTexture2DId = texture2DId;
GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, mTexture2DId);
Log.i(TAG, "glBindTexture " + mTexture2DId + " to init for FBO");
// make 2D texture empty
GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_RGBA, textureWidth, textureHeight, 0,
GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, null);
Log.i(TAG, "glTexImage2D, textureWidth: " + textureWidth + ", textureHeight: " + textureHeight);
GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, 0);
fboStart();
GLES30.glFramebufferTexture2D(GLES30.GL_FRAMEBUFFER, GLES30.GL_COLOR_ATTACHMENT0,
GLES30.GL_TEXTURE_2D, mTexture2DId, 0);
Log.i(TAG, "glFramebufferTexture2D");
int fboStatus = GLES30.glCheckFramebufferStatus(GLES30.GL_FRAMEBUFFER);
Log.i(TAG, "fbo status: " + fboStatus);
if (fboStatus != GLES30.GL_FRAMEBUFFER_COMPLETE) {
throw new RuntimeException("framebuffer " + mFBOId + " incomplete!");
}
fboEnd();
}
public void fboStart() {
GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, mFBOId);
}
public void fboEnd() {
GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, 0);
}
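tryToCreateFBO() isn't shown above; a minimal version would presumably just generate the framebuffer object, something like this sketch (assuming mFBOId starts out as 0):
public void tryToCreateFBO() {
    if (mFBOId != 0) {
        return;
    }
    int[] idContainer = new int[1];
    GLES30.glGenFramebuffers(1, idContainer, 0);
    mFBOId = idContainer[0];
    Log.i(TAG, "FBO generated: " + mFBOId);
}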
And finally some code on Unity-side:
int textureId = plugin.Call<int>("displayTriangle");
Debug.Log("native textureId: " + textureId);
Texture2D triangleTexture = Texture2D.CreateExternalTexture(
960, 960, TextureFormat.RGBA32, false, true, (IntPtr) textureId);
triangleTexture.UpdateExternalTexture(triangleTexture.GetNativeTexturePtr());
rawImage.texture = triangleTexture;
rawImage.color = Color.white;
Well, the code above does not display the expected triangle, only a blue background. I added glGetError after nearly every OpenGL function call and no errors are reported.
My Unity version is 2017.2.1. For the Android build, I turned off experimental multithreaded rendering, and the other settings are all default (no texture compression, no development build, and so on). My app's minimum API level is 5.0 Lollipop and the target API level is 9.0 Pie.
I really need some help, thanks in advance!
Now I found the answer: if you want to do any drawing jobs in your plugin, you should do them at the native layer. So if you want to make an Android plugin, you should call OpenGL-ES APIs from JNI instead of the Java side. The reason is that Unity only allows drawing graphics on its rendering thread. If you simply call OpenGL-ES APIs from the Java side, as I did in the question description, they will actually run on the Unity main thread instead of the rendering thread. Unity provides a method, GL.IssuePluginEvent, to call your own functions on the rendering thread, but it needs native coding since this function requires a function pointer as its callback. Here is a simple example of using it:
At JNI side:
// you can copy these headers from https://github.com/googlevr/gvr-unity-sdk/tree/master/native_libs/video_plugin/src/main/jni/Unity
#include "IUnityInterface.h"
#include "UnityGraphics.h"
static void on_render_event(int event_type) {
// do all of your jobs related to rendering, including initializing the context,
// linking shaders, creating program, finding handles, drawing and so on
}
// UnityRenderingEvent is an alias of void(*)(int) defined in UnityGraphics.h
UnityRenderingEvent get_render_event_function() {
UnityRenderingEvent ptr = on_render_event;
return ptr;
}
// notice you should return a long value to Java side
extern "C" JNIEXPORT jlong JNICALL
Java_com_abc_xyz_YourPluginClass_getNativeRenderFunctionPointer(JNIEnv *env, jobject instance) {
UnityRenderingEvent ptr = get_render_event_function();
return (long) ptr;
}
At Android Java side:
class YourPluginClass {
...
public native long getNativeRenderFunctionPointer();
...
}
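One thing the snippet above leaves implicit: the shared library containing the JNI function has to be loaded before the native method is called, typically in a static initializer of the plugin class. The library name below is a placeholder.
static {
    System.loadLibrary("yourplugin"); // placeholder; use the actual .so name your build produces
}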
At Unity side:
private void IssuePluginEvent(int pluginEventType) {
long nativeRenderFuncPtr = Call_getNativeRenderFunctionPointer(); // call through plugin class
IntPtr ptr = (IntPtr) nativeRenderFuncPtr;
GL.IssuePluginEvent(ptr, pluginEventType); // pluginEventType is related to native function parameter event_type
}
void Start() {
IssuePluginEvent(1); // let's assume 1 stands for initializing everything
// get your texture2D id from plugin, create Texture2D object from it,
// attach that to a GameObject, and start playing for the first time
}
void Update() {
// call SurfaceTexture.updateTexImage in plugin
IssuePluginEvent(2); // let's assume 2 stands for transferring TEXTURE_EXTERNAL_OES to TEXTURE_2D through FrameBuffer
// call Texture2D.UpdateExternalTexture to update GameObject's appearance
}
You still need to transfer the texture, and everything about it should happen at the JNI layer. But don't worry: the steps are nearly the same as those in the question description, only in a different language than Java, and there is plenty of material about this process, so you can surely make it work.
Finally, let me state the key to solving this problem again: do your native work at the native layer and don't cling to pure Java... I'm totally surprised that there is no blog/answer/wiki telling us to just write our code in C++. Although there are open-source implementations like Google's gvr-unity-sdk that give a complete reference, you may still doubt whether you can finish the task without writing any C++ code. Now we know that we can't. However, to be honest, I think Unity has the ability to make this process even easier.

Disappearing 3D texture with libgdx

This is a bug that has been troubling and demotivating me for several days now. Perhaps someone has some insight.
I build a terrain mesh programmatically and set up a renderable for it, with a material that has a texture attribute with a repeating texture created from a PNG file. All of this appears to work fine until the camera is moved for a while, along, say, the x or z axes, at which time the texture will initially flicker to black, and eventually stay black, if the camera is moved far enough away.
I clear the screen with blue, so the mesh is still getting rendered, just without the texture: it is solid black. This happens at the same spots with regard to camera placement. Once the camera is far enough away, the problem persists if the camera continues to move away from the world origin.
The problem does not appear on desktop, only android. I am using an HTC EVO 4G to test. I realize that this phone's early Adreno GPU has some serious 3D issues, but I've surely seen games, or at least demos, with large textured landscapes work fine. My mesh has around 5000 triangles and usually renders around 30-40 fps.
Some things I have experimented with:
Change PNG compression to 0 in the GIMP export options
Used JPG instead of PNG
Explicitly specifying no mipmapping
Changed filtering modes
PNG files of different sizes and resolutions
Check glGetError
Here is the relevant code:
private void createMesh(){
mesh = new Mesh(true, vertices.length, indices.length,
new VertexAttribute(Usage.Position, 3, "a_position"),
new VertexAttribute(Usage.Normal, 3, "a_normal"),
new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord0"));
float vertexData[] = new float[vertices.length * 8];
for (int x = 0; x < vertices.length; x++) {
vertexData[x*8] = vertices[x].position.x;
vertexData[x*8+1] = vertices[x].position.y;
vertexData[x*8+2] = vertices[x].position.z;
vertexData[x*8+3] = vertices[x].normal.x;
vertexData[x*8+4] = vertices[x].normal.y;
vertexData[x*8+5] = vertices[x].normal.z;
vertexData[x*8+6] = vertices[x].texCoord.x;
vertexData[x*8+7] = vertices[x].texCoord.y;
}
mesh.setVertices(vertexData);
mesh.setIndices(indices);
}
The texture repeats across the terrain, but I've commented out the setWrap line before and the problem persists.
private void createRenderable(Lights lights){
texture.setWrap(TextureWrap.Repeat, TextureWrap.Repeat);
TextureAttribute textureAttribute1 = new TextureAttribute(TextureAttribute.Diffuse, texture);
terrain = new Renderable();
terrain.mesh = mesh;
terrain.worldTransform.trn(-xTerrainSize/2, (float) -heightRange, -zTerrainSize/2);
terrain.meshPartOffset = 0;
terrain.meshPartSize = mesh.getNumIndices();
terrain.primitiveType = GL10.GL_TRIANGLE_STRIP; // or GL_TRIANGLES, etc.
terrain.material = new Material(textureAttribute1);
terrain.lights = lights;
}
Note that decreasing the far clipping plane does not make a difference.
...
lights = new Lights();
lights.ambientLight.set(0.2f, 0.2f, 0.2f, 1f);
lights.add(new DirectionalLight().set(0.8f, 0.8f, 0.8f, -1f, -0.8f, -0.2f));
cam = new PerspectiveCamera(90, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
cam.position.set(0, 0, 0);
cam.lookAt(0,0,-1);
cam.near = 0.1f;
cam.far = 1000f;
cam.update();
...
#Override
public void render () {
...
Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
Gdx.graphics.getGL10().glClearColor( 0, 0, 1, 1 );
Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
modelBatch.begin(cam);
modelBatch.render(terrain);
modelBatch.end();
...
}
Any insight into this problem would be much appreciated. Unfortunately, it is difficult to screen capture on an EVO 4G, or else I would post a picture.

gluUnProject always returns zero

I need gluUnProject to convert screen coordinates to world coordinates, and right now I just about have it working. When my app runs it accurately tells me the coordinates on screen, which I know are stored in my renderer thread, and then pumps out the converted coordinates. Unfortunately the screen coordinates seem to have no effect on the world coordinates, which remain at zero.
Here is my gluUnProject method
public void vector3 (GL11 gl){
int[] viewport = new int[4];
float[] modelview = new float[16];
float[] projection = new float[16];
float winx, winy, winz;
float[] newcoords = new float[4];
gl.glGetIntegerv(GL11.GL_VIEWPORT, viewport, 0);
((GL11) gl).glGetFloatv(GL11.GL_MODELVIEW_MATRIX, modelview, 0);
((GL11) gl).glGetFloatv(GL11.GL_PROJECTION_MATRIX, projection, 0);
winx = (float)setx;
winy = (float)viewport[3] - sety;
winz = 0;
GLU.gluUnProject(setx, (float)viewport[3] - sety, 0, modelview, 0, projection, 0, viewport, 0, newcoords, 0);
posx = (int)newcoords[0];
posy = (int)newcoords[1];
posz = (int)newcoords[2];
Log.d(TAG, "x= " + String.valueOf(posx));
Log.d(TAG, "y= " + String.valueOf(posy));
Log.d(TAG, "z= " + String.valueOf(posz));
}
Now I've searched and found this forum post, where they came to the conclusion that it had to do with using getFloatv instead of getDoublev, but getDoublev does not seem to be supported by GL11:
The method glGetDoublev(int, float[], int) is undefined for the type GL11
and also
The method glGetDoublev(int, double[], int) is undefined for the type GL11
Should the double vs. float thing matter, and if so, how do I go about using doubles?
Thank you
EDIT:
I was told that gluUnProject fails when too close to the near or far clipping plane, so I set winz to -5 when near is 0 and far is -10. This had no effect on the output.
I also logged each part of the newcoords[] array and they all return NaN (not a number). Could this be the problem, or is it something higher up in the algorithm?
I'm guessing you're working on the emulator? Its OpenGL implementation is rather buggy, and after testing I found that it returns all zeroes for the following calls:
gl11.glGetIntegerv(GL11.GL_VIEWPORT, viewport, 0);
gl11.glGetFloatv(GL11.GL_MODELVIEW_MATRIX, modelview, 0);
gl11.glGetFloatv(GL11.GL_PROJECTION_MATRIX, projection, 0);
The gluUnProject() function needs to calculate the inverse of the combined modelview-projection matrix, and since these are all zeroes, the inverse does not exist and will consist of NaNs. The resulting newcoords vector is therefore also all NaNs.
Try it on a device with a proper OpenGL implementation and it should work. Keep in mind that you still need to divide by newcoords[3] though ;-)
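In other words, once the glGet* calls return real data, the end of the vector3() method from the question would look roughly like this (sketch):
GLU.gluUnProject(setx, (float) viewport[3] - sety, 0, modelview, 0,
        projection, 0, viewport, 0, newcoords, 0);
// gluUnProject returns homogeneous coordinates; divide by w to get world space
float w = newcoords[3];
posx = (int) (newcoords[0] / w);
posy = (int) (newcoords[1] / w);
posz = (int) (newcoords[2] / w);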

VBOs Extremely slow on real hardware (LG Optimus V 670) - Android 2.2.1

I'm working on a 2D OpenGL graphics engine for my Android game; so far I have implemented basic non-textured quad rendering via VBOs.
To do this my graphics engine creates a VBO of a quad when initialized and, upon rendering, draws it using the location/dimensions specified by a Polygon2D object.
When rendering 30 - 50 quads on actual hardware (LG Optimus V 670) the frame rate is around 5 - 10 and on the emulator it is 30 - 40.
Here's the code to give a better understanding
public void CreateBuffers(GL10 gl)
{
GL11 gl11 = (GL11)gl;
mQuadBuffer = ByteBuffer.allocateDirect(QUAD2D.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
mQuadBuffer.put(QUAD2D, 0, QUAD2D.length);
mQuadBuffer.flip();
int[] buffer = new int[1];
gl11.glGenBuffers(1, buffer, 0);
mQuadVBOId = buffer[0];
gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, mQuadVBOId);
gl11.glBufferData(GL11.GL_ARRAY_BUFFER, mQuadBuffer.capacity() * 4, mQuadBuffer, GL11.GL_DYNAMIC_DRAW);
}
public void draw(GL10 gl) {
GL11 gl11 = (GL11)gl;
Polygon2D poly;
int length = mPolygons.size();
gl.glEnableClientState(GL11.GL_VERTEX_ARRAY);
gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, mQuadVBOId);
gl11.glVertexPointer(2, GL11.GL_FLOAT, 0, 0);
for(int i = 0; i < length; i++)
{
poly = mPolygons.get(i);
gl11.glPushMatrix();
gl11.glTranslatef(poly.x, poly.y, 0);
gl11.glScalef(poly.width, poly.height, 0);
gl11.glDrawArrays(GL11.GL_LINE_LOOP, 0, 4);
gl11.glPopMatrix();
}
gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, 0);
gl.glDisableClientState(GL11.GL_VERTEX_ARRAY);
}
Am I doing something wrong? Other OpenGL applications, such as Replica Island, seem to run fine.
I doubt this is useful but here are the specs http://pdadb.net/index.php?m=specs&id=2746&c=lg_vm670_optimus_v
The VBOs are not slow, but the many separate draw calls are. You should combine all of the quads into one VBO (i.e. calculate the vertex positions for each quad yourself) and draw them with one call. To be able to do that, you will also have to switch from GL_LINE_LOOP to GL_LINES to get separation between the quads.
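A rough sketch of that idea, reusing the names from the question (the batch buffer is allocated per call here for clarity; in practice you would reuse one that is large enough):
public void drawBatched(GL10 gl) {
    GL11 gl11 = (GL11) gl;
    int count = mPolygons.size();
    // 4 edges per quad, 2 vertices per edge, 2 floats per vertex = 16 floats per quad
    float[] verts = new float[count * 16];
    int o = 0;
    for (int i = 0; i < count; i++) {
        Polygon2D p = mPolygons.get(i);
        float x0 = p.x, y0 = p.y, x1 = p.x + p.width, y1 = p.y + p.height;
        float[] quad = {
            x0, y0, x1, y0,   // bottom edge
            x1, y0, x1, y1,   // right edge
            x1, y1, x0, y1,   // top edge
            x0, y1, x0, y0 }; // left edge
        System.arraycopy(quad, 0, verts, o, 16);
        o += 16;
    }
    FloatBuffer batch = ByteBuffer.allocateDirect(verts.length * 4)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();
    batch.put(verts).flip();
    gl.glEnableClientState(GL11.GL_VERTEX_ARRAY);
    gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, mQuadVBOId);
    gl11.glBufferData(GL11.GL_ARRAY_BUFFER, verts.length * 4, batch, GL11.GL_DYNAMIC_DRAW);
    gl11.glVertexPointer(2, GL11.GL_FLOAT, 0, 0);
    gl11.glDrawArrays(GL11.GL_LINES, 0, count * 8); // one draw call for all quads
    gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, 0);
    gl.glDisableClientState(GL11.GL_VERTEX_ARRAY);
}
Since the vertex positions are already in world space, the per-quad glTranslatef/glScalef calls go away as well.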

Android OpenGL ES simple tile generator performance problem

Following this question: Best approach for oldschool 2D zelda-like game
Thanks to the previous replies, and with major inspiration from http://insanitydesign.com/wp/projects/nehe-android-ports/, I started to build a simple tile generator for my simple 2D Zelda-like game project.
I can now generate a map with the same textured tile, using two nested for(..) loops to draw horizontal and vertical tiles, and I have some basic DPAD key input listeners to scroll over the x and y axes.
But now I'm running into my first performance problems, with just one texture and one model.
When trying to build a 10x10 map, scrolling is fine and smooth.
When trying 50x50, things get worse, and with 100x100 it's completely unacceptable.
Is there a way to tell OpenGL to render only the 'visible' part of my map set and ignore the hidden tiles? I'm totally new to this.
I'm using
GLU.gluLookAt(gl, cameraPosX, cameraPosY, 10.0f,cameraPosX, cameraPosY, 0.0f, 0.0f, 1.0f, 0.0f);
to set the camera and point of view for a 2D-style feeling.
Any help? :)
for (int j = 0; j < 10; j++) {
for (int i = 0; i < 10; i++) {
gl.glPushMatrix(); // save the matrix on the stack
//Bind the texture according to the set texture filter
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[filter]);
//Set the face rotation
gl.glFrontFace(GL10.GL_CW);
//Enable texture state
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
//Enable vertex state
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
//Point to our vertex buffer
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
//point to our texture buff
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
//Draw the vertices as triangle strip
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
//Disable the client state before leaving
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glTranslatef(1.0f, 0.0f, 0.0f); // advance one tile
}
// now start drawing the next row
gl.glPopMatrix(); // restore the matrix from the stack
gl.glTranslatef(0.0f, -1.0f, 0.0f);
}
The reason the loop gets slow is that it makes OpenGL do lots of unnecessary work, because there are lots of redundant state changes.
That means you are calling GL functions with parameters that don't have any effect. Calling these functions eats up a lot of CPU time and might cause the whole OpenGL pipeline to stall, as it cannot work very effectively.
For example, you should call glBindTexture only if you want to change the texture used. The code above binds the same texture over and over again in the inner loop, which is very expensive. Similarly, you don't need to enable and disable the texture coordinate and vertex arrays in the inner loop. Even setting the texture coordinate pointer and vertex pointer in the inner loop is unnecessary, as they don't change between iterations.
The bottom line is that in the inner loop you should only change the translation and call glDrawArrays; everything else just eats up resources for nothing.
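Concretely, the loop from the question could be restructured along these lines (an untested sketch, using the same variable names as the question):
// set all state once, outside the loops
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[filter]);
gl.glFrontFace(GL10.GL_CW);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
for (int j = 0; j < 10; j++) {
    gl.glPushMatrix(); // remember the start of the row
    for (int i = 0; i < 10; i++) {
        gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
        gl.glTranslatef(1.0f, 0.0f, 0.0f); // advance one tile
    }
    gl.glPopMatrix(); // back to the start of the row
    gl.glTranslatef(0.0f, -1.0f, 0.0f); // move down to the next row
}
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);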
There are more advanced things you can do to speed this up even more. The tile background can be drawn so that it causes only one call to glDrawArrays (or glDrawElements). If you are interested, you should Google topics like batching and texture atlases.
You can easily make your loop draw only the visible area.
Here is an example of how it can be done. I don't know the Android API, so treat my example as pseudocode.
int cols = SCREEN_WIDTH / TILE_SIZE + 1; // how many columns can fit on the screen
int rows = SCREEN_HEIGHT / TILE_SIZE + 1; // haw many rows can fit on the screen
int firstVisibleCol = cameraPosX / TILE_SIZE; // first column we need to draw
int firstVisibleRow = cameraPosY / TILE_SIZE; // first row we need to draw
// and now the loop becomes
for (int j = firstVisibleRow; j < firstVisibleRow + rows; j++) {
for (int i = firstVisibleCol; i < firstVisibleCol + cols; i++) {
...
}
}
