How do I load a mesh (3ds Max) into a ByteBuffer in Android OpenGL?
Example:
float triangleCoords[] = {
    -0.5f, -0.25f, 0,         // 0
     0.5f, -0.25f, 0,         // 1
     0.0f,  0.559016994f, 0   // 2
};
ByteBuffer vbb = ByteBuffer.allocateDirect(triangleCoords.length * _4_FLOAT_LENGTH); // 4 bytes per float
vbb.order(ByteOrder.nativeOrder());
triangleVB = vbb.asFloatBuffer();
triangleVB.put(triangleCoords);
triangleVB.position(0);
How can I load a mesh into an array of coordinates like this, or do I need a library for that job?
It basically boils down to loading the data from the file into your own data structures (like the vertex array from your code snippet) and handing these to OpenGL for rendering (in your case with glDrawArrays, for example). The procedure is generally the same no matter whether you are working with Android, Windows, Plan 9, OpenGL or Direct3D; only the implementation details differ, which is why this question is a bit broad.
You can look at this description of the 3DS file format (hinted at in your question), which also has links to tutorials for loading these files. Although 3DS is a fairly easy-to-read binary format, for a start you might also look into the Wavefront OBJ format, a simple ASCII format that nearly every modeling package can export (the same should hold for 3DS). Loading either of these formats into a bunch of simple vertex arrays should boil down to a few lines of code.
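To make the OBJ case concrete, here is a minimal sketch (my own, untested illustration) that reads only the vertex position lines ("v x y z") of an OBJ file into a flat float array like the one in your snippet; a real loader would also parse the face ("f") lines and build an index list:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

// Reads only "v x y z" lines; faces, normals and texture coords are ignored.
public static float[] loadObjVertices(InputStream in) throws IOException {
    List<Float> coords = new ArrayList<Float>();
    BufferedReader reader = new BufferedReader(new InputStreamReader(in));
    String line;
    while ((line = reader.readLine()) != null) {
        if (line.startsWith("v ")) {
            String[] parts = line.trim().split("\\s+");
            coords.add(Float.parseFloat(parts[1])); // x
            coords.add(Float.parseFloat(parts[2])); // y
            coords.add(Float.parseFloat(parts[3])); // z
        }
    }
    float[] result = new float[coords.size()];
    for (int i = 0; i < result.length; i++) {
        result[i] = coords.get(i);
    }
    return result;
}

The returned array can then be wrapped in a direct FloatBuffer exactly as in your snippet.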
These should get you started. If you run into any specific problems implementing mesh loading on your particular platform, feel free to ask a more specific question.
Related
I have a FloatBuffer as output from a neural network, with the RGB channels encoded as values in [-1 .. +1]. I would like to render it on screen using GLSurfaceView. What is the best way to handle this?
I could dump the buffer into an SSBO and write a compute shader that maps it to a ByteBuffer in the [0 .. 255] range, then somehow bind that to a regular texture. Or maybe I can set up my compute shader to output directly to some texture buffer? Or am I supposed to read the SSBO directly from the fragment shader (and implement my own linear interpolation)?
So, which is the best way to render this via OpenGL ES? Please help.
You can try to load it with glTexSubImage2D, but it depends on how many updates per second you need; that is something to test on your machine.
First bind your texture (you must create one first); then, when your input buffer is ready, use:
GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, width, height, GLES20.GL_RGB, GLES20.GL_FLOAT, inputFloatBuffer);
It works well with a ByteBuffer; I did not try it with floats, but there is no signed-float format.
Use a kernel to convert the signed floats to bytes first.
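A minimal CPU-side sketch of that conversion (my code, assuming width, height and the input FloatBuffer from the question, and a texture already bound to GL_TEXTURE_2D):

// Maps [-1, +1] floats to [0, 255] bytes so the data can be uploaded
// with GL_RGB / GL_UNSIGNED_BYTE.
ByteBuffer pixelBytes = ByteBuffer.allocateDirect(width * height * 3);
pixelBytes.order(ByteOrder.nativeOrder());
inputFloatBuffer.position(0);
for (int i = 0; i < width * height * 3; i++) {
    float v = inputFloatBuffer.get();                  // value in [-1, +1]
    int b = (int) ((v * 0.5f + 0.5f) * 255.0f + 0.5f); // remap and round
    pixelBytes.put((byte) Math.max(0, Math.min(255, b)));
}
pixelBytes.position(0);
GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1); // RGB rows may not be 4-byte aligned
GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, width, height,
        GLES20.GL_RGB, GLES20.GL_UNSIGNED_BYTE, pixelBytes);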
This is how I made an array of triangles:
float[] tableVerticesWithTriangle = {
    // triangle 1
    0f, 0f, 9f, 14f, 0f, 14f,
    // triangle 2
    0f, 0f, 9f, 0f, 9f, 14f
};
and this is how I allocated the block in native memory:
vertexData = ByteBuffer
        .allocateDirect(tableVerticesWithTriangle.length * BYTES_PER_FLOAT) // BYTES_PER_FLOAT == 4
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer();
vertexData.put(tableVerticesWithTriangle);
The reason people use ByteBuffer.allocateDirect() is that the other buffer classes, like FloatBuffer, do not have an allocateDirect() method; only a ByteBuffer can be allocated as a direct buffer. So allocating a ByteBuffer and then viewing the memory as a FloatBuffer is the only way to get a directly allocated FloatBuffer.
What is a direct buffer?
The documentation of isDirect() of the FloatBuffer class explains it like this:
Indicates whether this buffer is direct. A direct buffer will try its best to take advantage of native memory APIs and it may not stay in the Java heap, so it is not affected by garbage collection.
A float buffer is direct if it is based on a byte buffer and the byte buffer is direct.
In other (less formal) words, a direct buffer is a native memory allocation that Java is not messing with.
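A quick way to see the difference (plain Java, assuming the usual java.nio imports):

// Wrapping a Java array gives a heap-backed, non-direct buffer:
FloatBuffer wrapped = FloatBuffer.wrap(new float[] {0f, 1f, 2f});
System.out.println(wrapped.isDirect()); // false

// A FloatBuffer view of a direct ByteBuffer is direct:
FloatBuffer direct = ByteBuffer.allocateDirect(3 * 4)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer();
System.out.println(direct.isDirect()); // true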
When are direct buffers required?
Strangely enough, I have never been able to find clear documentation for this. So the following is a hypothesis that I confirmed with experiments, without finding any counter-examples so far.
Direct buffers have to be used when a buffer is passed to an OpenGL API where the memory is used by the OpenGL implementation after the call returns.
There is only one example of this I could find: client-side vertex arrays (which, by the way, are marked as a legacy feature in ES 3.0, but are still supported). This is the glVertexAttribPointer() overload with the following signature, which supports vertex arrays without the use of VBOs:
glVertexAttribPointer(int indx, int size, int type, boolean normalized,
                      int stride, Buffer ptr)
In this case, OpenGL will pull vertex data from the buffer in later draw calls, so the buffer content has to remain accessible to OpenGL after the call returns, and will potentially be read directly by the GPU.
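As an illustration, a minimal client-side setup might look like this (a sketch: program is assumed to be a linked shader program, and "aPosition" is a hypothetical attribute name):

// The buffer must be direct: OpenGL reads from it during the draw call
// (and potentially later), after this Java call has returned.
float[] coords = { -0.5f, -0.25f, 0f,   0.5f, -0.25f, 0f,   0f, 0.5f, 0f };
FloatBuffer vertexBuf = ByteBuffer.allocateDirect(coords.length * 4)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer();
vertexBuf.put(coords).position(0);

int positionHandle = GLES20.glGetAttribLocation(program, "aPosition");
GLES20.glEnableVertexAttribArray(positionHandle);
GLES20.glVertexAttribPointer(positionHandle, 3, GLES20.GL_FLOAT, false, 0, vertexBuf);
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 3);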
In all other cases (again according to my hypothesis), it is not necessary to use direct buffers. You can for example do the following:
float[] vertexData = {...};
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertexData.length * 4,
        FloatBuffer.wrap(vertexData), GLES20.GL_STATIC_DRAW);
The glBufferData() call consumes the data during the call, and the original buffer cannot be accessed by OpenGL after the call returns. Therefore, it is not necessary to use a direct buffer.
On Android, using OpenGL ES 2.0, I am trying to run some performance tests with different internal texture formats.
Initially I have a lot of RGBA textures (PNG) which I want to load and store internally in a different format with OpenGL (for example RGB or LUMINANCE). I load my textures using glTexImage2D like this:
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId);
...
int size = bitmap.getRowBytes() * bitmap.getHeight();
ByteBuffer b = ByteBuffer.allocate(size);
bitmap.copyPixelsToBuffer(b);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, bitmap.getWidth(),
        bitmap.getHeight(), 0, GLES20.GL_RGBA,
        GLES20.GL_UNSIGNED_BYTE, b);
This works fine; however, if I change the first GLES20.GL_RGBA (the internalFormat parameter) to anything else (GLES20.GL_RGB or GLES20.GL_LUMINANCE), my texture appears all black. Changing the second GLES20.GL_RGBA to the same value will display something, but obviously not correctly, as the original data is RGBA.
I thought maybe it had something to do with the shader code, that maybe texture2D(..) returns a different value because the internal format of the texture is different. My shader code is simply:
gl_FragColor = texture2D(texture, fragment_texture_coordinate);
I tried changing this around too, but no luck yet. So I thought maybe glTexImage2D is not working the way I think it does (I am not an expert in this area whatsoever).
What am I doing wrong?
Edit:
I overlooked this little detail in the texImage2D documentation. It appears that:
internalformat must match format. No conversion between formats is supported during texture image processing. type may be used as a hint to specify how much precision is desired, but a GL implementation may choose to store the texture array at any internal resolution it chooses.
What I gather from this is that if you want to store your textures in a format different from their original one, you'll have to convert them yourself.
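For example, a conversion from RGBA to tightly packed RGB could look like this (my sketch, assuming an ARGB_8888 bitmap, which copyPixelsToBuffer() writes out in RGBA byte order; note the GL_UNPACK_ALIGNMENT call, since RGB rows are not necessarily 4-byte aligned):

int w = bitmap.getWidth(), h = bitmap.getHeight();
ByteBuffer rgba = ByteBuffer.allocate(w * h * 4); // ARGB_8888 -> RGBA byte order
bitmap.copyPixelsToBuffer(rgba);
rgba.position(0);

// Drop the alpha channel so the data matches internalformat == format == GL_RGB.
ByteBuffer rgb = ByteBuffer.allocateDirect(w * h * 3);
for (int i = 0; i < w * h; i++) {
    rgb.put(rgba.get()); // R
    rgb.put(rgba.get()); // G
    rgb.put(rgba.get()); // B
    rgba.get();          // skip A
}
rgb.position(0);

GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1); // RGB rows may not be 4-byte aligned
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGB, w, h, 0,
        GLES20.GL_RGB, GLES20.GL_UNSIGNED_BYTE, rgb);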
Your fragment shader must be written to agree with the format you are giving to glTexImage2D(). For GL_RGB, it should force the alpha to 1.0, like this:
vec3 Color_RGB = texture2D(sampler2d, texCoordinate).rgb;
gl_FragColor = vec4(Color_RGB, 1.0);
But, for GL_RGBA, it should look like this:
vec4 Color_RGBA = texture2D(sampler2d, texCoordinate);
gl_FragColor = Color_RGBA;
And, as has been discussed, you can only use the Android Bitmap class for textures if your PNG files have no transparency. This article explains why:
http://software.intel.com/en-us/articles/porting-opengl-games-to-android-on-intel-atom-processors-part-1
I am trying to batch draw a bunch of lines on Android using OpenGL ES 2.0 and I need to know the best way to do this.
Right now I made a class called LineEngine which builds up a FloatBuffer of all the vertices to draw and then draws all the lines at once. The problem is that apparently FloatBuffer.put() is very slow and is gobbling up CPU time like crazy.
Here is my class:
public class LineEngine {
    private static final float[] IDENTITY = new float[16];

    private FloatBuffer mLinePoints;
    private FloatBuffer mLineColors;
    private int mCount;

    public LineEngine(int maxLines) {
        Matrix.setIdentityM(IDENTITY, 0);

        // 2 vertices per line, 4 floats per vertex, 4 bytes per float
        ByteBuffer byteBuf = ByteBuffer.allocateDirect(maxLines * 2 * 4 * 4);
        byteBuf.order(ByteOrder.nativeOrder());
        mLinePoints = byteBuf.asFloatBuffer();

        byteBuf = ByteBuffer.allocateDirect(maxLines * 2 * 4 * 4);
        byteBuf.order(ByteOrder.nativeOrder());
        mLineColors = byteBuf.asFloatBuffer();

        reset();
    }

    public void addLine(float[] position, float[] color) {
        mLinePoints.put(position, 0, 8); // These lines
        mLineColors.put(color, 0, 4);    // are taking
        mLineColors.put(color, 0, 4);    // the longest!
        mCount++;
    }

    public void reset() {
        mLinePoints.position(0);
        mLineColors.position(0);
        mCount = 0;
    }

    public void draw() {
        mLinePoints.position(0);
        mLineColors.position(0);

        GraphicsEngine.setMMatrix(IDENTITY);
        GraphicsEngine.setColors(mLineColors);
        GraphicsEngine.setVertices4d(mLinePoints);
        GraphicsEngine.disableTexture();
        GLES20.glDrawArrays(GLES20.GL_LINES, 0, mCount * 2);
        GraphicsEngine.disableColors();

        reset();
    }
}
Is there a better way to batch all these lines together?
What you are trying to do is called sprite batching. If you want your program to be robust in OpenGL ES, you have to check the following (a list of what makes your program slow):
-Changing too many OpenGL ES states each frame.
-Enabling/disabling things (textures etc.) again and again. Once you enable something, you don't have to do it again; it stays applied every frame.
-Using too many separate assets instead of sprite sheets. Yes, that makes a huge performance impact. For example, if you have 10 images, you have to load a different texture for each image, and that is SLOW. A better way is to create a sprite sheet (you can check Google for free sprite sheet creators).
-Your images are high quality. Yes! Check your resolution and file extension. Prefer .png files.
-Watch out for the API 8 float buffer bug (you have to use an int buffer instead to work around it).
-Using Java containers. This is the biggest pitfall: if you use string concatenation (not a container, but it returns a new String each time), or a List, or any other container that RETURNS A NEW INSTANCE, chances are your program will be slow due to garbage collection. For input handling I would suggest you look up a technique called the Pool class (see the sketch below). Its purpose is to recycle objects instead of creating new ones. The garbage collector is the big enemy; try to keep it happy and avoid any programming technique that might wake it up.
-Never load things on the fly; instead, make a loader class and load all the necessary assets at the beginning of the app.
If you implement this list, chances are your program will be robust.
One last addition. What is a sprite batcher? A sprite batcher is a class that uses a single texture to render multiple objects. It automatically creates the vertices, indices, colors and u-v coordinates and renders them as one batch. This pattern saves GPU as well as CPU power. From your posted code I can't be sure what causes the CPU to slow down, but from my experience it is due to one (or more) items on the list above. Check the list, follow it, search Google for sprite batching, and I am sure your program will run fast. Hope I helped!
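Here is a minimal sketch of such a Pool class (my illustration, not from any particular library):

import java.util.ArrayList;
import java.util.List;

// Minimal object pool: hands out recycled instances instead of allocating
// new ones, so the garbage collector has less work to do.
public class Pool<T> {
    public interface Factory<T> { T create(); }

    private final List<T> free = new ArrayList<T>();
    private final Factory<T> factory;

    public Pool(Factory<T> factory) { this.factory = factory; }

    public T obtain() {
        int last = free.size() - 1;
        return last >= 0 ? free.remove(last) : factory.create();
    }

    public void recycle(T object) {
        free.add(object); // the caller must stop using the object after this
    }
}

Instead of allocating a new object per touch event, you obtain() one from the pool and recycle() it when you are done with it.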
Edit: I think I found what causes your program to slow down: you don't flip the buffer! You just reset the position, so you keep adding more and more objects and overload the buffer. In the reset method, flip the buffers instead; mLineColors.flip() and mLinePoints.flip() will do the job. Make sure you call them each frame if you send new vertices each frame.
FloatBuffer.put(float[]) with a relatively large float array should be considerably faster. The single put(float) calls have plenty of overhead.
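Applied to the LineEngine above, a sketch of that idea (my rewrite, not tested code): stage the values in plain float arrays and copy them into the direct buffers with one bulk put() per frame:

// Hypothetical variant of the LineEngine above: accumulate in plain
// float arrays, then copy with a single bulk put() per frame.
private final float[] pointStage; // allocate as maxLines * 2 * 4 in the constructor
private final float[] colorStage; // same size: 2 vertices * 4 floats per line

public void addLine(float[] position, float[] color) {
    System.arraycopy(position, 0, pointStage, mCount * 8, 8);
    System.arraycopy(color, 0, colorStage, mCount * 8, 4);
    System.arraycopy(color, 0, colorStage, mCount * 8 + 4, 4);
    mCount++;
}

public void draw() {
    mLinePoints.position(0);
    mLinePoints.put(pointStage, 0, mCount * 8); // one bulk copy
    mLineColors.position(0);
    mLineColors.put(colorStage, 0, mCount * 8);
    mLinePoints.position(0);
    mLineColors.position(0);
    // ... set up state and call glDrawArrays as before ...
}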
Just go for a very simple native function that is called from your class. You can pass a float[] to OpenGL directly from native code; there is no need to kill CPU time with the buffer interface.
I have searched a lot and nothing has solved my problem. I'm new both to Android and to 3D programming. I'm working on an Android project where I need to draw a 3D object on the device using OpenGL ES. For each pixel I have a distance value between 200 and 9000, which needs to be mapped as a Z coordinate. The object is 320x240.
The questions are:
How do I map from (x, y, z) to the OpenGL ES coordinate system? I have created a vertex array whose values look like {50f, 50f, 400f, 50f, 51f, 290f, ...}; each pixel is represented as three floats (x, y, z).
How can this vertex array be drawn using OpenGL ES on Android?
Is it possible to draw 320x240 pixels with OpenGL ES?
OpenGL doesn't really work well with large coordinate values (anything much over 10.0f; that's just the way it is designed). It is better to convert your coordinates to lie between -1 and 1 (i.e. normalize them) than to make OpenGL use coordinates like 50f or 290f.
The reason the coordinates are normalized to between -1 and 1 is that model coordinates are only supposed to be relative to each other, not indicative of actual dimensions in a specific game/app. The model could be used in many different games/apps with different coordinate systems, so you want all the model coordinates in some normalized standard form, which the programmer can then interpret in their own way.
To normalize, loop through all your coordinates and find the value furthest from 0, i.e.:
float maxValueX = 0;
float maxValueY = 0;
float maxValueZ = 0;

// find the max absolute value of x, y and z
for (int i = 0; i < coordinates.length; i++) {
    maxValueX = Math.max(Math.abs(coordinates[i].getX()), maxValueX);
    maxValueY = Math.max(Math.abs(coordinates[i].getY()), maxValueY);
    maxValueZ = Math.max(Math.abs(coordinates[i].getZ()), maxValueZ);
}

// scale all the coordinates to be between -1 and 1
for (int i = 0; i < coordinates.length; i++) {
    Vector3f coordinate = coordinates[i];
    coordinate.setX(coordinate.getX() / maxValueX);
    coordinate.setY(coordinate.getY() / maxValueY);
    coordinate.setZ(coordinate.getZ() / maxValueZ);
}
You only need to do this once. Assuming you are storing your data in a file, you can write a little utility program that applies the above to the file and saves the result, rather than normalizing every time you load the data into your app.
Check out the GLSurfaceView activity in the ApiDemos that ship with the Android SDK; it will give you a basic primer on how Android handles rendering through OpenGL ES. It is located in android-sdk/samples/android-10/ApiDemos. Make sure you have downloaded the 'Samples for SDK' under the given API level.
Here are a couple of resources to get you started as well:
Android Dev Blog on GLSurfaceView
Instructions on OpenGLES
Android Development Documentation on OpenGL
Hope that helps.
Adding to what James had mentioned about normalizing to [-1,1].
A little bit of code:
Fill the data into a flat array as x, y, z, assuming you are using a vertex shader similar to:
"attribute vec3 coord3d;" +
"uniform mat4 transform;" +
"void main(void) {" +
" gl_Position = transform * vec4(coord3d.xyz, 1.0f);" + // size of 3 with a=1.0f for all points
" gl_PointSize = 10.0;"+
"}"
Get the attribute:
attribute_coord3d = glGetAttribLocation(program, "coord3d");
Create VBO:
glGenBuffers(1, vbo,0);
Bind it:
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
Put data in:
glBufferData(GL_ARRAY_BUFFER, SIZE_OF_ARRAY, makeFloatBuffer(FlatArray), GL_STATIC_DRAW); // SIZE_OF_ARRAY = float count * 4 bytes
where makeFloatBuffer is a function that creates a direct buffer:
private FloatBuffer makeFloatBuffer(float[] arr) {
    ByteBuffer bb = ByteBuffer.allocateDirect(arr.length * 4);
    bb.order(ByteOrder.nativeOrder());
    FloatBuffer fb = bb.asFloatBuffer();
    fb.put(arr);
    fb.position(0);
    return fb;
}
Bind and point to the buffer:
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glEnableVertexAttribArray(attribute_coord3d);
glVertexAttribPointer(attribute_coord3d, 3, GL_FLOAT, false, vertexStride, 0);
where vertexStride = num_components * Float.BYTES; in our case num_components = 3 (x, y, z).
Draw:
glDrawArrays(GL_POINTS, 0, NUM_OF_POINTS);
Disable the attribute array:
glDisableVertexAttribArray(attribute_coord3d);