Starfield optimization in libgdx - Android

I want to create a static starfield in libgdx.
My first approach was to create a Decal for each star and render them with a DecalBatch.
When I draw a Decal I apply a billboarding technique to it:
star.decal.setRotation(camera.direction, camera.up);
Next, I wanted to animate the alpha of the decals, so from time to time I assign a random value:
star.decal.setColor(1, 1, 1, 0.6f+((float) Math.random()*0.4f) );
It works, but my FPS drops from 55 to 25 (because of my 500-1000 stars).
Can I render all of the stars with a single batch call somehow? Maybe a particle material with only one vertex list, drawn in GL_POINT mode, so that the points always face the camera?
How can I do this in libgdx?

The DecalBatch is far more complex than what you need: on every frame it copies all the vertices of the sprites into another array and runs calculations on them to work out the scale, rotation, etc.
As you suspect, GL_POINT sprites will be much faster, and a mid-range device should be able to render something like 2000 points with different positions and colors at 60 FPS.
Here is some old code of mine. It's in C and uses OpenGL ES 1.1, and there is probably a simpler way to do it in libgdx:
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnable (GL_POINT_SPRITE_OES);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, TXTparticle);
glTexEnvi(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glPointSize(30);
glColorPointer(4, GL_FLOAT, 32, particlesC);//particlesC the vertices color
glVertexPointer(3, GL_FLOAT, 24, particlesV);//particlesV the vertices
glDrawArrays(GL_POINTS, 0, vertvitLenght/6);
glDisable( GL_POINT_SPRITE_OES );
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
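In libgdx you can get the same single-draw-call behaviour by putting every star into one Mesh and drawing it as GL_POINTS through a small shader. Below is a minimal GLES 2.0 sketch, not tested against your setup; the attribute/uniform names, the point size of 3 and the shader itself are my own choices, not anything libgdx prescribes:

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Camera;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Mesh;
import com.badlogic.gdx.graphics.VertexAttribute;
import com.badlogic.gdx.graphics.VertexAttributes.Usage;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;

public class PointStarfield {
    private final int starCount;
    private final float[] verts;   // x, y, z, r, g, b, a per star
    private final Mesh mesh;
    private final ShaderProgram shader;

    public PointStarfield(int starCount) {
        this.starCount = starCount;
        verts = new float[starCount * 7];
        for (int i = 0; i < starCount; i++) {
            int o = i * 7;
            verts[o]     = (float) (Math.random() * 200 - 100); // x
            verts[o + 1] = (float) (Math.random() * 200 - 100); // y
            verts[o + 2] = (float) (Math.random() * 200 - 100); // z
            verts[o + 3] = verts[o + 4] = verts[o + 5] = 1f;    // white
            verts[o + 6] = 1f;                                  // alpha
        }
        mesh = new Mesh(false, starCount, 0,
                new VertexAttribute(Usage.Position, 3, "a_position"),
                new VertexAttribute(Usage.ColorUnpacked, 4, "a_color"));
        mesh.setVertices(verts);

        String vert = "attribute vec3 a_position;\n"
                + "attribute vec4 a_color;\n"
                + "uniform mat4 u_projTrans;\n"
                + "varying vec4 v_color;\n"
                + "void main() {\n"
                + "  v_color = a_color;\n"
                + "  gl_PointSize = 3.0;\n"
                + "  gl_Position = u_projTrans * vec4(a_position, 1.0);\n"
                + "}";
        String frag = "#ifdef GL_ES\nprecision mediump float;\n#endif\n"
                + "varying vec4 v_color;\n"
                + "void main() { gl_FragColor = v_color; }";
        shader = new ShaderProgram(vert, frag);
    }

    // Twinkle: touch only the alpha components, then re-upload the array once per frame.
    public void update() {
        for (int i = 0; i < starCount; i++) {
            verts[i * 7 + 6] = 0.6f + (float) Math.random() * 0.4f;
        }
        mesh.setVertices(verts);
    }

    public void render(Camera camera) {
        Gdx.gl.glEnable(GL20.GL_BLEND); // needed so the alpha animation is visible
        Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
        shader.bind(); // use shader.begin()/end() on older libgdx versions
        shader.setUniformMatrix("u_projTrans", camera.combined);
        mesh.render(shader, GL20.GL_POINTS); // one draw call for all stars
    }
}

Points are screen-aligned by definition, so no billboarding is needed, and the twinkle costs one vertex-array upload per frame instead of per-decal work.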

Related

how to assign one texture to another efficiently in OpenGL ES for Android

I want to copy texture1 to texture2. The background: I generate 15 empty texture ids and bind them to GLES11Ext.GL_TEXTURE_EXTERNAL_OES. The following is my code:
int[] textures = new int[15];
GLES20.glGenTextures(15, textures, 0);
GlUtil.checkGlError("glGenTextures");
for(int i=0;i<15;i++)
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textures[i]);
Then I always transfer the camera preview into textures[0]. I want to copy textures[0] into textures[1] to keep the frame content at timestamp 1, copy textures[0] into textures[2] to keep the frame content at timestamp 2, and so on. In effect I want to buffer some texture data on the GPU and render parts of it later. Is there any way to do this? And can I just use textures[2] = textures[0] to copy texture data?
There isn't a very direct way to copy texture data in ES 2.0. The easiest way is probably glCopyTexImage2D(). To use it, you have to create an FBO and attach the source texture to it. Say srcTexId is the id of the source texture and dstTexId the id of the destination texture:
GLuint fboId = 0;
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, srcTexId, 0);
glBindTexture(GL_TEXTURE_2D, dstTexId);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, width, height, 0);
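In the Android Java bindings (android.opengl.GLES20, as in the question's code) the same copy looks roughly like this; a hedged sketch, with srcTexId, dstTexId, width and height being the same assumed inputs as above:

static void copyTexture(int srcTexId, int dstTexId, int width, int height) {
    int[] fbo = new int[1];
    GLES20.glGenFramebuffers(1, fbo, 0);
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
    // Attach the source texture to the FBO so it becomes the read source.
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
            GLES20.GL_TEXTURE_2D, srcTexId, 0);
    // Copy the FBO contents into the destination texture.
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, dstTexId);
    GLES20.glCopyTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 0, 0, width, height, 0);
    // Restore the default framebuffer and clean up.
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
    GLES20.glDeleteFramebuffers(1, fbo, 0);
}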
That being said, from your description, I don't believe that this is really what you should be doing, for the following reasons:
I don't think copying texture data as shown above will work for the external textures you are using.
Copying texture data will always be expensive, and sounds completely unnecessary to solve your problem.
It sounds like you want to keep the 15 most recent camera images. To do this, you can simply track which of your 15 textures contains the most recent image, and treat the list of the 15 texture ids as a circular buffer.
Say initially you create your texture ids:
int[] textures = new int[15];
GLES20.glGenTextures(15, textures, 0);
int newestIdx = 0;
Then every time you receive a new frame, you write it to the next entry in your list of texture ids, wrapping around at 15:
newestIdx = (newestIdx + 1) % 15;
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textures[newestIdx]);
// Capture new frame into currently bound texture.
Then, every time you want to use the ith frame, with 0 referring to the most recent, 1 to the frame before that, etc., you bind it with:
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textures[(newestIdx - i + 15) % 15]);
So the textures never get copied. You just keep track of which texture contains which frame, and access them accordingly.
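To make the bookkeeping concrete, here is a small self-contained sketch of that ring buffer (my own helper class, not an Android API):

import android.opengl.GLES11Ext;
import android.opengl.GLES20;

public final class TextureRing {
    private final int[] textures = new int[15];
    private int newestIdx = 0;

    public TextureRing() {
        GLES20.glGenTextures(15, textures, 0);
    }

    // Bind the slot the next camera frame should be captured into.
    public void bindForNewFrame() {
        newestIdx = (newestIdx + 1) % 15;
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textures[newestIdx]);
    }

    // Bind the i-th most recent frame (0 = newest, 1 = the one before, ..., up to 14).
    public void bindFrame(int i) {
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textures[(newestIdx - i + 15) % 15]);
    }
}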

GLES20 calls in onDrawFrame causes long delay

I have written a game in which, when onDrawFrame() is called, I update the game state first (game logic and the buffers for drawing) and then proceed to do the actual drawing. On the Moto G and Nexus 7 everything works smoothly and each onDrawFrame() call takes only 1-5 ms. However, on the Samsung Galaxy S3, 90% of the time the onDrawFrame() call takes as long as 30-50 ms to complete.
Investigating the issue further, I found the problem lies wholly in the first render method, attached below.
(Edit: the block is now on glClear(), having removed the unnecessary calls to get the handles; please refer to the comments.)
public void render(float[] m, FloatBuffer vertex, FloatBuffer texture, ShortBuffer indices, int TextureNumber) {
    GLES20.glVertexAttribPointer(mPositionHandle, 3, GLES20.GL_FLOAT, false, 0, vertex);
    GLES20.glEnableVertexAttribArray(mPositionHandle);
    GLES20.glVertexAttribPointer(mTexCoordLoc, 2, GLES20.GL_FLOAT, false, 0, texture);
    GLES20.glEnableVertexAttribArray(mTexCoordLoc);
    GLES20.glUniformMatrix4fv(mtrxhandle, 1, false, m, 0);
    int mSamplerLoc = GLES20.glGetUniformLocation(fhGraphicTools.sp_Image, "s_texture");
    GLES20.glUniform1i(mSamplerLoc, TextureNumber);
    GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.capacity(), GLES20.GL_UNSIGNED_SHORT, indices);
    // Disable vertex arrays used
    GLES20.glDisableVertexAttribArray(mPositionHandle);
    GLES20.glDisableVertexAttribArray(mTexCoordLoc);
}
The above method is called 5 times in each onDrawFrame() to draw things from different texture atlases (the first 4 calls actually draw the game's background, which is one long rectangle). Having logged the time each line of code takes, I found that the lag I am seeing on the S3 always resides in one of the lines below:
int mPositionHandle = GLES20.glGetAttribLocation(fhGraphicTools.sp_Image, "vPosition");
or
GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.capacity(), GLES20.GL_UNSIGNED_SHORT, indices);
The lag only occurs in the first call to render(); the subsequent calls take about 1-2 ms each. I tried disabling the first render() call, but then the second call, which previously took only 1-2 ms, became the source of the lag, at the same lines.
Does anyone have an idea what is wrong in my code that the S3 cannot handle? It seems the S3 has a problem starting GL calls at the beginning of each onDrawFrame(), but why it behaves this way puzzles me. What can be done to decrease the "startup" delay?
Many thanks for your patience in reading.
Edited code to take Selvin's recommendation.
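(That recommendation amounts to looking the handles up once, right after the program is linked, instead of inside render(). A hedged sketch using the names from the code above; the remaining handles would be cached the same way as fields:)

// Do this once after the program is linked, not every frame:
mPositionHandle = GLES20.glGetAttribLocation(fhGraphicTools.sp_Image, "vPosition");
mSamplerLoc = GLES20.glGetUniformLocation(fhGraphicTools.sp_Image, "s_texture");
// ...and likewise for mTexCoordLoc and mtrxhandle.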
I fixed the issue: when I moved all my PNG image files from /drawable to /drawable-nodpi, the game speed went back to normal on the S3.
I assume the auto-scaling done by Android delivered the raw bitmaps I supplied to OpenGL at "bad" sizes, causing unnecessary texture filtering work on each onDrawFrame; that made the call miss vsync and produced the long delay in glClear while waiting for the next vsync. Please correct me if I am wrong; I'm still a newbie to graphics.
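(If moving the files were not an option, a possible alternative based on the same scaling theory would be to decode the bitmaps without density scaling before handing them to OpenGL; a hedged sketch, not from the original post:)

import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public final class TextureBitmaps {
    // Decode a drawable resource at its raw pixel size, skipping Android's dpi scaling.
    public static Bitmap loadUnscaled(Resources res, int resId) {
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inScaled = false;
        return BitmapFactory.decodeResource(res, resId, opts);
    }
}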

android - Can't read pixels from GraphicBuffer at adreno GPU, by Karthik's method(Hacky alternatives of glReadPixels)

Since July I have been developing an Android application to edit video files like .avi, .flv, etc. I use FFMPEG and OpenGL ES 2.0 to implement it.
Because executing a filter effect like "Blur" on the CPU requires too many calculations, I decided to use OpenGL ES 2.0 to apply filter effects to a video frame using the GPU and shaders.
What I am trying to do is use a shader to apply a filter effect to a frame of video and then read back the pixels stored in the framebuffer.
So I would have to use glReadPixels, the only OpenGL ES 2.0 method for getting pixels out of a framebuffer. But many GPU development guides recommend against glReadPixels and warn about its potential risks, and its performance differs depending on GPU version and vendor. I could not firmly decide to use glReadPixels and tried to find another method for getting pixels that are the result of GPU calculation.
After a few days, I found a hacky method for getting pixel data by using Android's GraphicBuffer.
Here is the link.
Following that link, I applied Karthik's method to my code.
The only difference is:
//render method I made.
void renderFrame(){
    /* some codes to init */
    glBindFramebuffer(GL_FRAMEBUFFER, iFBO);
    /* Set the viewport according to the FBO's texture. */
    glViewport(0, 0, mTexWidth, mTexHeight);
    /* Clear screen on FBO. */
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Different Code compare to Karthik's.
    contents->setTexture();
    contents->draw(mPositionVarIndex, mTextrueCoIndex);
    contents->releaseData();

    /* And unbind the FrameBuffer Object so subsequent drawing calls are to the EGL window surface. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    LOGI("Read Graphic Buffer");
    // Just in case the buffer was not created yet
    void* vaddr;
    // Lock the buffer and retrieve a pointer where we are going to write the data
    buffer->lock(GRALLOC_USAGE_SW_WRITE_OFTEN, &vaddr);
    if (vaddr == NULL) {
        LOGE("lock error");
        buffer->unlock();
        return;
    }
    /* some codes that use the pixels from GraphicBuffer...*/
}

void setTexture(){
    glGenTextures(1, mTexture);
    glBindTexture(GL_TEXTURE_2D, mTexture[0]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mData);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glGenerateMipmap(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 0);
}

void releaseData(){
    glDeleteTextures(1, mTexture);
    glDeleteBuffers(1, mVbo);
}

void draw(int positionIndex, int textureIndex){
    mVbo[0] = create_vbo(lengthOfArray*sizeOfFloat*2, NULL, GL_STATIC_DRAW);

    glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
    glBufferSubData(GL_ARRAY_BUFFER, 0, lengthOfArray*sizeOfFloat, this->vertexData);
    glEnableVertexAttribArray(positionIndex);
    // checkGlError("glEnableVertexAttribArray");
    glVertexAttribPointer(positionIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
    // checkGlError("glVertexAttribPointer");
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
    glBufferSubData(GL_ARRAY_BUFFER, lengthOfArray*sizeOfFloat, lengthOfArray*sizeOfFloat, this->mImgTextureData);
    glEnableVertexAttribArray(textureIndex);
    glVertexAttribPointer(textureIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(lengthOfArray*sizeOfFloat));
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, mTexture[0]);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 6);
    checkGlError("glDrawArrays");
}
I use a texture and render a frame to fill the buffer. I have two test phones: a Samsung Galaxy S2, whose renderer is a Mali-400MP, and an LG Optimus G Pro, whose renderer is an Adreno (TM) 320. The Galaxy S2 works well with the above code and Karthik's method, but on the LG smartphone there are some problems:
E/libgenlock(17491): perform_lock_unlock_operation: GENLOCK_IOC_DREADLOCK failed (lockType0x1,err=Connection timed out fd=47)
E/gralloc(17491): gralloc_lock: genlock_lock_buffer (lockType=0x2) failed
W/GraphicBufferMapper(17491): lock(...) failed -22 (Invalid argument)
According to this link,
On Qualcomm hardware pre-Android-4.2, a Qualcomm-specific mechanism,
named Genlock, is used.
I could only see the errors related to Genlock, so I cautiously guessed there is some problem between GraphicBuffer and the Qualcomm GPU. After that, I searched and read the code of Gralloc.cpp, GraphicBufferMapper.cpp, GraphicBuffer.cpp and the *.h files to find the reasons for those errors, but failed.
My questions are:
Is this the right approach to getting a filter effect from GPU calculation? If not, how can I get a filter effect like "Blur", which requires so many calculations?
Does Karthik's method not work on Qualcomm GPUs? I want to know why those errors occur only on the Qualcomm GPU, the Adreno.
Make sure your GraphicBuffer allocation has GRALLOC_USAGE_SW_READ_OFTEN specified. Without it you may not be able to lock the buffer from code running on the CPU.
Unrelated but possibly suggestive of a better approach: see the CameraToMpegTest example, which does a trivial edit to live camera input using a GLES 2.0 shader.
Update: there's now an example of applying filters with the GPU in Grafika. You can see a screenrecorded demo here.
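(For completeness, the portable but slower route the question rules out is plain glReadPixels from the bound FBO. A minimal Java sketch, assuming the FBO is still bound and width/height match its texture:)

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import android.opengl.GLES20;

public final class PixelReader {
    // Read the currently bound framebuffer back into CPU memory as RGBA8888.
    public static ByteBuffer readRgba(int width, int height) {
        ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
                .order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
        pixels.rewind();
        return pixels;
    }
}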

How to draw/render a Bullet Physics collision body/shape?

I have integrated the Bullet Physics engine into my Android program with the NDK (I am using Vuforia's imagetarget example for Android), and it is set up and working correctly. However, I would like to render/draw my collision boxes/planes so I can see my rigid bodies (btRigidBody) and collision shapes (btCollisionShape). I'm positive this is possible, but I can't find any tutorials on how to do it!
I took the Hello World Bullet Physics tutorial from their wiki page and modified it to apply the transformations from the falling physics body to a 3D object I have in OpenGL ES 2.0, so I can view the collision bodies. Here is the code I am using to render the object:
void drawRigidBody(btRigidBody* body, QCAR::Matrix44F modelViewMatrix, unsigned int textureID)
{
    btTransform trans;
    body->getMotionState()->getWorldTransform(trans);
    LOG("sphere pos: (x %f , y %f, z %f)", trans.getOrigin().getX(), trans.getOrigin().getY(), trans.getOrigin().getZ());

    float physicsMatrix[16];
    trans.getOpenGLMatrix(physicsMatrix);

    SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale,
                                 &modelViewMatrix.data[0]);

    QCAR::Matrix44F modelViewProjection, objectMatrix;
    SampleUtils::multiplyMatrix(&modelViewMatrix.data[0], physicsMatrix, &objectMatrix.data[0]);
    SampleUtils::multiplyMatrix(&projectionMatrix.data[0], &objectMatrix.data[0], &modelViewProjection.data[0]);

    glVertexAttribPointer(vertexHandle, 3, GL_FLOAT, GL_FALSE, 0,
                          (const GLvoid*) &signVerts[0]);
    glVertexAttribPointer(normalHandle, 3, GL_FLOAT, GL_FALSE, 0,
                          (const GLvoid*) &signNormals[0]);
    glVertexAttribPointer(textureCoordHandle, 2, GL_FLOAT, GL_FALSE, 0,
                          (const GLvoid*) &signTexCoords[0]);

    glEnableVertexAttribArray(vertexHandle);
    glEnableVertexAttribArray(normalHandle);
    glEnableVertexAttribArray(textureCoordHandle);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glUniformMatrix4fv(mvpMatrixHandle, 1, GL_FALSE,
                       (GLfloat*)&modelViewProjection.data[0]);
    glDrawArrays(GL_TRIANGLES, 0, signNumVerts);
}
EDIT: Looking at the code for btBoxShape, I noticed you can grab the box vertices and normals:
btVector3** vertices= wallShape->getVertices();
btVector3**normals = wallShape->getNormals();
But you can't grab a list of indices to draw the vertex points in a certain order!
If I recall correctly, this is not the proper way to draw debug shapes in Bullet. Did you read the user manual (PDF), page 16?
You are supposed to implement your own debug drawer class which implements btIDebugDraw, and in this class you implement the drawLine method.
You pass this debug drawer to Bullet with setDebugDrawer, and then enable it with world->getDebugDrawer()->setDebugMode(debugMode);
To draw the world, call world->debugDrawWorld();
Bullet then calls your drawLine implementation numerous times until a wireframe model of the physics world has been drawn.
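(If you happen to drive Bullet from the Java side through libgdx's gdx-bullet wrapper rather than the NDK, the extension ships a ready-made debug drawer. A hedged usage sketch; the DebugDrawer and btIDebugDraw classes come from the gdx-bullet extension, and dynamicsWorld/camera are assumed names from your own setup, not the asker's Vuforia code:)

// One-time setup:
DebugDrawer debugDrawer = new DebugDrawer();
debugDrawer.setDebugMode(btIDebugDraw.DebugDrawModes.DBG_DrawWireframe);
dynamicsWorld.setDebugDrawer(debugDrawer);

// In the render loop, after drawing the scene:
debugDrawer.begin(camera);
dynamicsWorld.debugDrawWorld();
debugDrawer.end();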

OpenGL Depth Buffer issue on Android

I am developing a 3D rendering engine for Android and have run into some issues with the depth buffer. I am drawing some cubes: one big one and two small ones that fall on top of the bigger one. While rendering, I can see that something with the depth buffer is obviously wrong, as seen in this screenshot:
The screenshot was taken on an HTC Hero (running Android 2.3.4) with OpenGL ES 1.1. The whole application is (still) targeted at OpenGL ES 1.1, and it looks the same on the emulator.
These are the calls in my onSurfaceCreated method in the renderer:
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    Log.d(TAG, "onsurfacecreated method called");

    int[] depthbits = new int[1];
    gl.glGetIntegerv(GL_DEPTH_BITS, depthbits, 0);
    Log.d(TAG, "Depth Bits: " + depthbits[0]);

    gl.glDisable(GL_DITHER);
    gl.glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);
    gl.glClearColor(1, 1, 1, 1);
    gl.glClearDepthf(1f);
    gl.glEnable(GL_CULL_FACE);
    gl.glShadeModel(GL_SMOOTH);
    gl.glEnable(GL_DEPTH_TEST);

    gl.glMatrixMode(GL_PROJECTION);
    gl.glLoadMatrixf(
            GLUtil.matrix4fToFloat16(mFrustum.getProjectionMatrix()), 0);

    setLights(gl);
}
The GL call for the depth bits returns 16 on the device and 0 on the emulator. It would have made sense if it only failed on the emulator, since there is obviously no depth buffer present there. (I've tried setting the EGLConfigChooser to true so it would create a config with as close to a 16-bit depth buffer as possible, but that didn't work on the emulator; it wasn't necessary on the device.)
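(For reference, explicitly asking the GLSurfaceView for a 16-bit depth buffer would look like the sketch below; view, context and renderer are assumed names, and setEGLConfigChooser must be called before setRenderer:)

GLSurfaceView view = new GLSurfaceView(context);
view.setEGLConfigChooser(8, 8, 8, 8, 16, 0); // R, G, B, A, depth, stencil bits
view.setRenderer(renderer);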
In my onDrawFrame method I make the following OpenGL Calls:
gl.glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
gl.glClearDepthf(1);
And then for each of the cubes:
gl.glEnableClientState(GL_VERTEX_ARRAY);
gl.glFrontFace(GL_CW);
gl.glVertexPointer(3, GL_FIXED, 0, mVertexBuffer);
// gl.glColorPointer(4, GL_FIXED, 0, mColorBuffer);
gl.glEnableClientState(GL_NORMAL_ARRAY);
gl.glNormalPointer(GL_FIXED, 0, mNormalBuffer);
// gl.glEnable(GL_TEXTURE_2D);
// gl.glTexCoordPointer(2, GL_FLOAT, 0, mTexCoordsBuffer);
gl.glDrawElements(GL_TRIANGLES, mIndexBuffer.capacity(),
GL_UNSIGNED_SHORT, mIndexBuffer);
gl.glDisableClientState(GL_NORMAL_ARRAY);
gl.glDisableClientState(GL_VERTEX_ARRAY);
What am I missing? If more code is needed just ask.
Thanks for any advice!
I got it to work correctly now. The problem was not OpenGL; it was (as Banthar mentioned) a problem with the projection matrix. I am managing the projection matrix myself, and the calculation of the final matrix was somehow corrupted (or at least not what OpenGL expects). I can't remember where I got the algorithm for my calculation, but once I changed it to the way OpenGL calculates the projection matrix (or directly called glFrustumf(...)), it worked fine.
Try enabling:
glDepthFunc(GL_LEQUAL);
glDepthMask(true);
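Put together in GL10 terms, matching the renderer above (a hedged sketch; the frustum values are placeholders, and glFrustumf is the call the accepted fix mentions):

// Depth state:
gl.glEnable(GL10.GL_DEPTH_TEST);
gl.glDepthFunc(GL10.GL_LEQUAL);
gl.glDepthMask(true);

// Projection via OpenGL's own frustum math instead of a hand-rolled matrix:
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
float ratio = (float) surfaceWidth / surfaceHeight; // assumed surface size fields
gl.glFrustumf(-ratio, ratio, -1f, 1f, 1f, 100f);    // left, right, bottom, top, near, far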
