I'm working on a little game demo using OpenGL ES 1 and a "BufferPool", which is essentially just a fancy interface over a direct ByteBuffer in which I store all the vertex, color, and texcoord data. I'm having some issues getting it to draw properly. I must be setting up the OpenGL state wrong, but I'm fairly new to OpenGL and can't tell what it is. I'd appreciate any help :)
code: http://pastebin.com/bw3eM0TW
I've put it up on pastebin as there's a fair amount of code related to the BufferPool and rendering.
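For comparison, a typical GLES 1.x client-state draw over direct buffers looks roughly like the sketch below. The class and buffer names are illustrative and not taken from the pastebin, and the sketch uses three separate FloatBuffers for clarity, whereas the BufferPool packs everything into one direct ByteBuffer.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.opengles.GL10;

public class QuadBatch {
    // Direct, native-order buffers are required for the GLES pointer calls
    private final FloatBuffer vertices;   // x,y per vertex
    private final FloatBuffer colors;     // r,g,b,a per vertex
    private final FloatBuffer texCoords;  // u,v per vertex
    private final int vertexCount;

    public QuadBatch(float[] verts, float[] cols, float[] uvs) {
        vertexCount = verts.length / 2;
        vertices = toDirectBuffer(verts);
        colors = toDirectBuffer(cols);
        texCoords = toDirectBuffer(uvs);
    }

    private static FloatBuffer toDirectBuffer(float[] data) {
        FloatBuffer fb = ByteBuffer.allocateDirect(data.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        fb.put(data).position(0);
        return fb;
    }

    public void draw(GL10 gl) {
        // Client-state arrays must be enabled before the pointer calls take effect
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);

        gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertices);
        gl.glColorPointer(4, GL10.GL_FLOAT, 0, colors);
        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, texCoords);

        gl.glDrawArrays(GL10.GL_TRIANGLES, 0, vertexCount);

        gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    }
}
```

Two common gotchas with a shared buffer like a BufferPool: the buffer must be allocated direct with native byte order, and its position must be rewound (or set to the right offset) before every gl*Pointer call.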
I've been programming for about a couple of years now, and I know the syntax of different languages pretty well, but I've spent most of my time making programs in basic Lua. I have played around with game development in Java and C#, using DirectX and OpenGL, but most of the time I don't even know terms that others may know really well, such as 'shaders' and 'VBOs'. I've heard of them, but I really don't know what they mean, so please explain it to me like I'm five if you have an answer.
I'm trying to make a simple Android game using OpenGL ES 2.0. I've seen many tutorials explaining how to draw one triangle, but not many that go beyond that. In my case, I have a world full of triangles; it's simply an array of true/false values, where a triangle is drawn at a position if the value is true and skipped if it's false. I've made it so triangles are only drawn until they go off screen, but I realized one problem remains: rendering is still super slow, taking about 2 seconds to render one frame.
At the moment I have one triangle class, and when the code checks whether a triangle belongs at a given position, it creates vertex points for that triangle, passes them into the triangle class, and creates a new triangle object with those vertices. I can see how wrong this can go, but it's honestly the only thing I can come up with given my very limited knowledge of OpenGL ES.
What I am looking for is to draw all of these triangles, in their correct positions, with the same color and size, efficiently enough that it doesn't take about 2 seconds to draw one frame.
If anyone has a solution, thank you.
What you are looking for is instanced draw calls. That basically means you send one VBO (Vertex Buffer Object, which holds all the vertex information of a mesh) plus an array of MVPMs (Model-View-Projection matrices, which hold the transformation information for every draw of that VBO) to the shader, all in a single draw call.
Unfortunately for us (I'm using OpenGL ES 2.0 right now as well), there is no way provided by OpenGL ES 2.0 to do this.
Anyway, there is an article about faking it, but I think it's too much trouble.
In other words, you are pretty much stuck with a draw call per mesh.
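Even so, the vertex data for each mesh should live in a VBO so that the per-draw cost stays low. A minimal GLES 2.0 upload-and-draw sketch, assuming a compiled shader program with an aPosition attribute (the class and names are just illustrative, not from the question's code):

```java
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class TriangleVbo {
    private final int vboId;
    private final int vertexCount;

    // vertices: packed x,y pairs for all triangles in this batch
    public TriangleVbo(float[] vertices) {
        vertexCount = vertices.length / 2;

        // Copy the vertex data into a direct, native-order FloatBuffer
        FloatBuffer data = ByteBuffer.allocateDirect(vertices.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        data.put(vertices).position(0);

        // Create the VBO and upload the data once (GL_STATIC_DRAW = rarely changes)
        int[] ids = new int[1];
        GLES20.glGenBuffers(1, ids, 0);
        vboId = ids[0];
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vboId);
        GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertices.length * 4,
                data, GLES20.GL_STATIC_DRAW);
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
    }

    // aPositionLocation: result of glGetAttribLocation(program, "aPosition")
    public void draw(int aPositionLocation) {
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vboId);
        GLES20.glEnableVertexAttribArray(aPositionLocation);
        // 2 floats per vertex, tightly packed, starting at offset 0 in the VBO
        GLES20.glVertexAttribPointer(aPositionLocation, 2, GLES20.GL_FLOAT,
                false, 0, 0);
        GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);
        GLES20.glDisableVertexAttribArray(aPositionLocation);
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
    }
}
```

Since the triangles in the question all share the same color and size, many of them can be packed into a single vertex array like this and drawn with one glDrawArrays call, which avoids creating a new object per triangle every frame.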
I'm using OpenGL ES 2.0 in a project I have been working on, with a couple of shader components that define what a texture should look like after modifications from a Bitmap. The SurfaceView will only ever contain a single image in my project.
I've been trying several different approaches and looking through code over the past 24 hours, and I'm just hoping for a quick response or two from the community. I'm not looking for solutions; I'll do that research.
It sounds as though, since we are using shaders, in order to do scaling and movement of the texture based on touch events I will have to use the Matrix utilities and OpenGL translations or camera movements to get the same effect as what is currently done within an ImageView. Would this be the appropriate approach? Perhaps even modify the shader code so that I have some additional input variables?
I don't believe I can use anything on the Android side to get the same effect, such as modifying the canvas of the SurfaceView or altering the dimensions of the UI in some other fashion that would achieve the same result?
Thanks. Again, solutions for zooming and moving around aren't necessary, just trying to get a grasp on intermixing OpenGL and Android appropriately for the task.
Why does it seem that several things are easier in 1.0 than in 2.0? Ease of use should improve between releases.
Yes. You will need to use an ortho projection and adjust the extents to zoom. See this link here. To pan, you can simply use glTranslatef.
If you would like to do this entirely in the pixel shader, you can use the texture matrix stack with glScalef and glTranslatef.
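If the project is on ES 2.0, where the fixed-function matrix stack and glTranslatef/glScalef aren't available, the usual equivalent is to build the same ortho/pan/zoom transform with Android's Matrix utilities (which the question mentions) and pass it to the vertex shader as a uniform. A rough sketch; the class name and the uMvpMatrix/aPosition names are assumptions rather than anything from the project:

```java
import android.opengl.GLES20;
import android.opengl.Matrix;

public class PanZoomCamera {
    private final float[] proj = new float[16];
    private final float[] view = new float[16];
    private final float[] mvp  = new float[16];

    // zoom > 1 narrows the visible extents (zooms in); panX/panY move the view
    public void update(float aspect, float zoom, float panX, float panY) {
        float halfW = aspect / zoom;
        float halfH = 1f / zoom;
        // Ortho projection whose extents shrink as zoom grows
        Matrix.orthoM(proj, 0, -halfW, halfW, -halfH, halfH, -1f, 1f);

        // The "camera" translation does the panning
        Matrix.setIdentityM(view, 0);
        Matrix.translateM(view, 0, -panX, -panY, 0f);

        Matrix.multiplyMM(mvp, 0, proj, 0, view, 0);
    }

    // uMvpMatrixLocation: glGetUniformLocation(program, "uMvpMatrix")
    public void apply(int uMvpMatrixLocation) {
        GLES20.glUniformMatrix4fv(uMvpMatrixLocation, 1, false, mvp, 0);
    }
}
```

The vertex shader would then just compute gl_Position = uMvpMatrix * aPosition; no extra shader inputs beyond that one uniform are needed for plain pan and zoom.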
I'm currently working through a set of tutorials on Android OpenGL ES (1.1) and feel like I'm starting to get a grasp of how the vertices and textures work, along with some sprite animation. As I understand, the only primitives here are points, straight lines, and triangles.
I'm now trying to create a simple curve and really don't know where to start.
I want the curve to be drawn dynamically to represent something like a beam deflection like this where I could input a force and have the curve change.
Is it something I would create with a line loop or triangle fan with a ton of vertices? Or perhaps a texture that I then manipulate?
Any input or a point in the right direction is much appreciated, thanks.
I can recommend this blog post: http://blog.uncle.se/2012/02/opengl-es-tutorial-for-android-part-ii-building-a-polygon/ (sadly, the original source returns a 404; hopefully this link provides the same quality of information). Anyhow, it's a good read for OpenGL.
You have the general idea. Whatever you do must be made of points, lines, or triangles. You can generate the numbers for any pseudo-curve however you want, but you're always going to end up passing the resulting vertices to OpenGL and connecting them with lines or triangles.
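One way to get a curve out of those primitives is to sample the curve function at many points and draw the samples as a GL_LINE_STRIP. A rough GLES 1.1 sketch, using the cantilever deflection formula y = F*x^2*(3L - x)/(6*E*I) purely as an illustrative function (the class and parameter names are made up):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.opengles.GL10;

public class BeamCurve {
    private FloatBuffer vertexBuffer;
    private int pointCount;

    // Sample the deflection curve into a vertex array; force, length, and
    // stiffness (E*I) are illustrative parameters, not from the question.
    public void rebuild(float force, float length, float stiffness, int samples) {
        pointCount = samples;
        float[] coords = new float[samples * 2];
        for (int i = 0; i < samples; i++) {
            float x = length * i / (samples - 1);
            // Cantilever deflection under an end load: y = F*x^2*(3L - x) / (6*E*I)
            float y = force * x * x * (3f * length - x) / (6f * stiffness);
            coords[i * 2] = x;
            coords[i * 2 + 1] = -y; // beam deflects downward
        }
        vertexBuffer = ByteBuffer.allocateDirect(coords.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        vertexBuffer.put(coords).position(0);
    }

    public void draw(GL10 gl) {
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
        // A line strip connects each sample to the next, approximating the curve
        gl.glDrawArrays(GL10.GL_LINE_STRIP, 0, pointCount);
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    }
}
```

When the input force changes, calling rebuild again with the new value and redrawing gives the dynamic curve the question asks about; a triangle strip instead of a line strip would work the same way if the curve needs thickness or fill.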
I'm making an Android OpenGL ES 2D app and trying to use part of my rendered screen as a texture for a billboard.
So far I've had partial success with glCopyTexSubImage - it only works on some phones.
Everywhere I read recommends using a framebuffer object (FBO) to render to a texture, but I can't grasp how to use it, so if anyone can help me understand this I would be very grateful.
If I use an FBO that is bound to a texture, is it possible to render just part of the screen? If not, isn't that a bit overkill? (It's also much more work mapping and moving the texture, and the texture would have to be big enough for the part I need not to be blurry.)
I need a snapshot of something that should be rendered to the screen anyway; does that mean I have to render my scene twice every frame (once for my texture and once for the actual render)? Am I missing something here?
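For reference, the basic render-to-texture setup with an FBO in GLES 2.0 looks roughly like the sketch below; the class, field names, and texture format are illustrative. Anything drawn between begin() and end() lands in texId, which can then be sampled like any other texture.

```java
import android.opengl.GLES20;

public class RenderTarget {
    public int fboId;
    public int texId;

    // width/height: the size of the texture you want to render into
    public void create(int width, int height) {
        int[] ids = new int[1];

        // Color texture that will receive the rendered pixels
        GLES20.glGenTextures(1, ids, 0);
        texId = ids[0];
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texId);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
                width, height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        // Framebuffer with the texture attached as its color buffer
        GLES20.glGenFramebuffers(1, ids, 0);
        fboId = ids[0];
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fboId);
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,
                GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, texId, 0);

        if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER)
                != GLES20.GL_FRAMEBUFFER_COMPLETE) {
            throw new RuntimeException("FBO is not complete");
        }
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
    }

    // Everything drawn between begin() and end() goes into texId
    public void begin(int width, int height) {
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fboId);
        GLES20.glViewport(0, 0, width, height);
    }

    public void end(int screenWidth, int screenHeight) {
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        GLES20.glViewport(0, 0, screenWidth, screenHeight);
    }
}
```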
I have tried to texture a 3D cube in Android using OpenGL ES, following an example written in C++, but after several attempts the result is disappointing!
So I want to know: has anyone done this before? Could you give me some suggestions?
Thanks in advance!
Lesson 6 on this page has a well-described Android example of showing a textured cube:
http://insanitydesign.com/wp/projects/nehe-android-ports/
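The key Android-specific part, and where ports from C++ examples usually go wrong, is loading the texture from a Bitmap with GLUtils. A minimal GLES 1.x sketch, assuming a GL10 instance from your Renderer and a placeholder drawable resource:

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.opengl.GLUtils;
import javax.microedition.khronos.opengles.GL10;

public class TextureLoader {
    // Returns a texture id to bind before drawing the cube's faces.
    // resourceId would be something like R.drawable.crate (a placeholder name).
    public static int load(GL10 gl, Context context, int resourceId) {
        Bitmap bitmap = BitmapFactory.decodeResource(
                context.getResources(), resourceId);

        int[] ids = new int[1];
        gl.glGenTextures(1, ids, 0);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, ids[0]);

        // Filtering must be set, or many GPUs will render the texture black
        gl.glTexParameterf(GL10.GL_TEXTURE_2D,
                GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D,
                GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);

        // Upload the Bitmap into the currently bound texture
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
        bitmap.recycle();
        return ids[0];
    }
}
```

You still need gl.glEnable(GL10.GL_TEXTURE_2D) and a texcoord array for each face of the cube, which is exactly what lesson 6 walks through.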