I am currently playing around with OpenGL ES for Android. I have decided to set myself the task of making a chair. Do I have to code each individual vertex, or is there a way to multiply one set and transform them, say, 10 units to the left?
For example, instead of having to code out each leg, can I multiply one into four and place them at different positions?
And if so is this possible outside of the rendering class?
You can do this fairly easily by calling the glTranslate() function between each chair-leg draw (that's the fixed-function API of OpenGL ES 1.x; in ES 2.0 the equivalent is applying a translation to your model matrix, e.g. with Matrix.translateM()). If you imagine drawing on a piece of paper where your hand is locked in position and can only draw the same chair leg in the same place each time, glTranslate() moves the piece of paper under your hand between drawing each chair leg.
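As a sketch of the idea (plain Java, not tied to any GL version; the vertex values and class name are made up for illustration), re-using one leg's vertex data and offsetting a copy before each draw might look like:

```java
// Sketch: one set of leg vertices, drawn four times at different offsets.
// Only the offset changes between draws; the leg data itself is defined once.
public class ChairLegDemo {
    // Apply a translation (tx, ty, tz) to a flat array of xyz vertices,
    // returning a new array; the original leg data stays untouched.
    static float[] translate(float[] vertices, float tx, float ty, float tz) {
        float[] out = new float[vertices.length];
        for (int i = 0; i < vertices.length; i += 3) {
            out[i]     = vertices[i]     + tx;
            out[i + 1] = vertices[i + 1] + ty;
            out[i + 2] = vertices[i + 2] + tz;
        }
        return out;
    }

    public static void main(String[] args) {
        float[] leg = {0f, 0f, 0f,  1f, 0f, 0f,  0f, 1f, 0f}; // one triangle of the leg
        // Four copies of the same leg at the corners of the seat:
        float[][] offsets = {{0, 0, 0}, {10, 0, 0}, {0, 0, 10}, {10, 0, 10}};
        for (float[] o : offsets) {
            float[] placed = translate(leg, o[0], o[1], o[2]);
            // ...upload `placed` (or pass the offset as a uniform) and draw here
            System.out.println(java.util.Arrays.toString(placed));
        }
    }
}
```

In practice you would not modify the vertex array on the CPU each frame; you would keep one vertex buffer and pass the offset into the shader, but the effect is the same.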
However, for a more complex model like a chair, you may want to consider building it in a 3D modelling software package, such as Blender (which is free). When you save it as a file, the file actually contains all the vertices. Depending on which file format you save as, you can then write some code to load the file, parse it to extract the vertices, and then use those vertices to draw the chair.
I want to render a wave that shows the frequency data of audio.
I have 150 data points per second.
I have rendered it using Canvas, drawing one line for each data value, so I show 150 lines for 1 second of song. It displays correctly, but when I scroll the view, it lags.
Is there any library that can render the data points using OpenGL, Canvas, or any other method that stays smooth while scrolling?
These are two waves. Each line represents one data point; the minimum value is zero and the maximum is the highest value in the data set.
How can I render this wave in OpenGL or with any other library? It lags during scrolling when rendered with Canvas.
Maybe you could show an example of how it looks. How do you create the lines? Are the points scattered? Do you have to connect them, or do you have a fixed point?
Usually in OpenGL ES the process looks like this:
- read in your data of audio
- sort them so that OpenGL knows how to connect them
- upload them to your vertex shader
I would really recommend this tutorial. I don't know your OpenGL background, so it is a perfect tool to start with.
Actually, your application shouldn't be too complicated, and the tutorial should offer you enough information. In case you want to visualize each second with 150 points, here is a small overview:
- Learn how to set up a window with OpenGL
- You described a 2D application, so:
  - define x values as e.g. -75 to 75
  - define y values as your data
  - define lines as (x, y) data sets
Use this to draw:
glBegin(GL_LINES);
glVertex2f(xOfLine, yMinOfLine);
glVertex2f(xOfLine, yMaxOfLine);
glEnd();
If you have to target mobile graphics, you need shaders, because OpenGL ES has no immediate mode (glBegin/glEnd) and only supports rendering through GLSL shaders.
define your OpenGL camera!
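For OpenGL ES 2.0, where glBegin/glEnd are unavailable, the line data from the overview above would be assembled on the CPU and uploaded to a vertex buffer. A minimal sketch of just the CPU-side assembly (plain Java; the -75..75 x-range follows the overview, method names are illustrative, buffer upload and shader setup are omitted):

```java
// Sketch: one vertical line (two vertices) per audio sample, x mapped to
// -75..75 and y from 0 up to the sample value. The resulting float array
// would be uploaded to a VBO and drawn with GL_LINES.
public class WaveVertices {
    static float[] buildLineVertices(float[] samples) {
        float step = samples.length > 1 ? 150f / (samples.length - 1) : 0f;
        float[] verts = new float[samples.length * 4]; // 2 vertices * (x, y) per line
        for (int i = 0; i < samples.length; i++) {
            float x = -75f + step * i;
            verts[4 * i]     = x;           // bottom vertex x
            verts[4 * i + 1] = 0f;          // bottom vertex y (baseline)
            verts[4 * i + 2] = x;           // top vertex x
            verts[4 * i + 3] = samples[i];  // top vertex y (data value)
        }
        return verts;
    }
}
```

For 150 points per second this is a tiny amount of data, so rebuilding or scrolling it each frame is cheap compared to issuing 150 Canvas draw calls.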
I was working on a vector-based mobile app. At first I used polygons to represent curves. However, I quickly reached the polygon limit on mobile phones. To overcome this limit, I started using a texture and coloring its pixels. Even though this was a pretty easy solution, I was limited by the maximum resolution of textures and by the per-pixel operations.
The only promising thing I've found was OpenVG, but it seems like it is not very popular.
So how are vector drawing apps on mobile phones created? I was stunned by Adobe Illustrator mobile, which seems to be able to draw limitless curves/lines in vector graphics.
One possible method to allow vector-based free-form curve drawing is to use Bézier curves. Bézier curves are constructed by interpolating between a start point, an end point and any number of control points. This makes it possible to construct a free-form curve from as few as 3 Cartesian points.
A benefit of this is that by storing only the point data representing the curve, you can then render the same curve at any scale without having to store the curve in a texture. You therefore do not need to store hundreds of intermediate points forming small line segments to represent the same curve.
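A minimal sketch of evaluating the simplest such curve, a quadratic Bézier (start, one control point, end), which is all that's needed to turn the three stored points back into screen geometry at any scale (the class and method names are illustrative):

```java
// Sketch: evaluate a quadratic Bézier at parameter t in [0, 1] using the
// standard form B(t) = (1-t)^2 * P0 + 2(1-t)t * P1 + t^2 * P2.
// Sampling t at small steps yields the polyline actually drawn on screen,
// while only the three defining points need to be stored.
public class QuadBezier {
    static float[] point(float[] p0, float[] p1, float[] p2, float t) {
        float u = 1f - t;
        return new float[]{
            u * u * p0[0] + 2f * u * t * p1[0] + t * t * p2[0],
            u * u * p0[1] + 2f * u * t * p1[1] + t * t * p2[1]
        };
    }
}
```

At t = 0 this returns the start point and at t = 1 the end point; the control point pulls the curve toward itself without lying on it.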
Here are a number of sample code snippets that use the Path object in Android to construct free form vector curves.
If you have a very large number of curves to render to the canvas, you only need to store the point data defining each Bézier curve. It is important to create the Path object only once, and to use its reset method to redefine the points each time you want to draw a new curve. Sample code to achieve this can be found here.
I'm trying to make a 2d map (for a game, think tiled world map) in OpenGL ES 2.0 for an android game. Basically, there are a few tile types that have different textures, and the map is randomly generated from these types, so from game-to-game the map changes but for the duration of a single game it stays the same.
My first thought was to generate a single large texture / image / bitmap (independent from OpenGL) beforehand basically stitching duplicate tile textures together to make the larger map, and then using this single texture for one large map rectangle. In theory I think this is simple and would work fine, but I'm worried that it won't scale well for larger maps and especially on mobile I'll run out of memory with such a large image map. Plus, there's a small set of tiles that are duplicated over and over so it seems like a tremendous waste to duplicate the pixel data in a big texture over and over.
My second thought was having many textures, one for each of the tile textures. But I'm not sure how this would work, texture-binding-wise, would I need the shaders to contain multiple texture references and within the shader have logic for using the right one?
Finally, I thought using a texture atlas could work, have one texture / image with all of the tile data in it, this would be relatively small. But I'm struggling to imagine how to get the maths to work out such that "tiles" or subsections of the map rectangle would use completely different texture coordinates.
Am I approaching this the wrong way? Should I be using a rectangle for each tile? At least this way I can pass the shaders both vertex and texture coordinates for each tile independently. This seems easier, but also seems wrong since the map really is just one rectangle that won't be changing.
My first thought was to generate a single large texture...
Actually, something like this has already been used in id Software's id Tech engine since version 4. It's called MegaTexture. Basically, it's one big texture, which can also hold additional data.
My second thought was having many textures...
You don't need to hold all the textures in a shader. Do it like this:
Implement a loop with n iterations, where n is the number of different texture types used.
Inside a loop, bind the current texture type.
Pass any data, like position/color/texture coords to shaders.
Draw all tiles that use the currently bound texture. You could use GLES30.glDrawElementsInstanced or GLES30.glDrawArraysInstanced if you are targeting devices with GLES 3.x or with an appropriate extension. Otherwise, draw your tiles using GLES20.glDrawArrays or GLES20.glDrawElements.
Shaders won't be complicated with this approach.
Finally, I thought using a texture atlas could work...
You could use loop here too and compute the texture coordinates for each tile type on CPU, then just pass them to shaders.
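The CPU-side texture-coordinate computation could be sketched like this (assuming the tiles are packed into a square atlas in a simple row-major grid; the names are illustrative):

```java
// Sketch: compute the UV rectangle of one tile inside a square atlas laid
// out as a tilesPerRow x tilesPerRow grid. Returns {uMin, vMin, uMax, vMax},
// which would be written into the tile's texture-coordinate attribute.
public class AtlasUV {
    static float[] tileUV(int tileIndex, int tilesPerRow) {
        float size = 1f / tilesPerRow;          // each tile covers 1/N of UV space
        int col = tileIndex % tilesPerRow;
        int row = tileIndex / tilesPerRow;
        return new float[]{col * size, row * size, (col + 1) * size, (row + 1) * size};
    }
}
```

With mipmapping or texture filtering enabled you may want to inset these coordinates by a fraction of a texel to avoid bleeding between neighbouring tiles.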
Considering your map does not change during a game session, the MegaTexture approach looks good. However, it depends on how large your map is and how much memory is available. Also, note that the maximum texture size is limited. It differs from device to device but should be (AFAIK) equal to or greater than the screen size, and at least 64 texels (16 for cube-mapped textures). You can query it on any device using glGetIntegerv(GL_MAX_TEXTURE_SIZE, ...).
I have imported a model (e.g. a teapot) using Rajawali into my scene.
What I would like is to label parts of the model (e.g. the lid, body, foot, handle and the spout) using plain Android views, but I have no idea how this could be achieved. Specifically, positioning the labels in the right place seems challenging. The idea is that when I transform my model's position in the scene, the tips of the labels stay correctly positioned.
The Rajawali tutorial shows how Android views can be placed on top of the scene here: https://github.com/Rajawali/Rajawali/wiki/Tutorial-08-Adding-User-Interface-Elements. I also understand how, using the transformation matrices, a 3D coordinate on the model can be transformed into a 2D coordinate on the screen, but I have no idea how to determine the exact 3D coordinates on the model itself. The model is exported to OBJ format using Blender, so I assume there is some clever way of determining the coordinates in Blender and exporting them to a separate file, or including them somehow in the OBJ file (not rendering those points, only including them as metadata), but I have no idea how I could do that.
Any ideas are very appreciated! :)
I would use a screenquad, not a view. This is a general GL solution, and will also work with iOS.
You must determine the indices of the desired model vertices. Using the text-rendering algo below, you can just fiddle with them until you hit the right ones.
1. Create a reasonable ARGB bitmap with the same aspect ratio as the screen.
2. Create the screenquad texture using this bitmap.
3. Create a canvas using this bitmap.
4. The rest happens in onDrawFrame(): clear the canvas using clear paint.
5. Use the MVP matrix to convert the desired model vertices to canvas coordinates.
6. Draw your desired text at the canvas coordinates.
7. Update the texture.
Your text will render very precisely at the vertices you specified. The GL thread will double-buffer and loop you back to #4. Super smooth 3D text animation!
Use double floating point math to avoid loss of precision during coordinate conversion, which results in wobbly text. You could even use the z value of the vertex to scale the text. Fancy!
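The vertex-to-canvas conversion mentioned above might be sketched like this (assuming a column-major MVP matrix, the layout android.opengl.Matrix produces; floats are used here for brevity, though as noted, doubles avoid wobble; names are illustrative):

```java
// Sketch: convert a model-space vertex to canvas pixel coordinates.
// clip = MVP * (x, y, z, 1), then perspective divide, then viewport mapping.
// This mirrors what GL does internally between the vertex shader and raster.
public class ModelToCanvas {
    static float[] project(float[] mvp, float x, float y, float z,
                           int canvasW, int canvasH) {
        // Column-major 4x4 multiply; only the rows we need.
        float cx = mvp[0] * x + mvp[4] * y + mvp[8]  * z + mvp[12];
        float cy = mvp[1] * x + mvp[5] * y + mvp[9]  * z + mvp[13];
        float cw = mvp[3] * x + mvp[7] * y + mvp[11] * z + mvp[15];
        float ndcX = cx / cw;                              // perspective divide
        float ndcY = cy / cw;
        float px = (ndcX * 0.5f + 0.5f) * canvasW;         // NDC [-1,1] -> pixels
        float py = (1f - (ndcY * 0.5f + 0.5f)) * canvasH;  // canvas y grows downward
        return new float[]{px, py};
    }
}
```

The y-flip is needed because NDC y points up while Android canvas y points down; skipping it puts every label on the wrong half of the screen.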
The performance bottleneck is #7 since the entire bitmap must be copied to GL texture memory, every frame. Try to keep the bitmap as small as possible, maintaining aspect ratio. Maybe let the user toggle the labels.
Note that the copy to GL texture memory is redundant since in OpenGL-ES, GL memory is just regular memory. For compatibility reasons, a redundant chunk of regular memory is reserved to artificially enforce the copy.
Although I'm technically working in the android platform with OpenGL 2.0 ES, I believe this can be applied to more OpenGL technologies.
I have a list of objects (enemies, characters, etc.) that I'm attempting to draw onto a grid, each space being 1x1, and each object matching. Presently, each object is self-translating: it takes its model coordinates and goes through a simple loop to adjust them to the appropriate world coordinates for its grid location (i.e. if it should be at (3,2), it translates its coordinates accordingly).
The problem I've reached is that I'm not sure how to efficiently draw them. I have a loop going through all the objects and calling draw for each object, similar to the Android tutorial, but this seems wildly inefficient.
The objects are each textured with their own square images, matching the 1x1 grid cell they fill. They likely will never need their own unique shaders, so the only things that seem to change between objects are the vertices and the textures.
Is there an efficient way to get each model into the pipeline without flushing because of uniform changes?
This probably requires some trial and error and is probably hardware dependent. I would use buffer objects for the meshes with GL_STATIC_DRAW, pack several textures into a bigger one, and draw all objects that depend on that bigger texture in a batch to avoid state changes as much as possible. Profile, and give us more information on where your bottleneck is.
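The "draw in batch to avoid state changes" idea can be sketched as grouping the objects by texture ID, so each texture is bound once per frame instead of once per object (plain Java; the GL bind/draw calls are replaced by comments):

```java
import java.util.*;

// Sketch: group object indices by the texture each object uses.
// Per frame you then iterate the map: one glBindTexture per entry,
// followed by one batched draw call for all objects in that entry.
public class TextureBatcher {
    static Map<Integer, List<Integer>> batchByTexture(int[] objectTextureIds) {
        Map<Integer, List<Integer>> batches = new LinkedHashMap<>();
        for (int obj = 0; obj < objectTextureIds.length; obj++) {
            batches.computeIfAbsent(objectTextureIds[obj],
                                    k -> new ArrayList<>()).add(obj);
        }
        return batches;
        // for (entry : batches) { /* glBindTexture(entry.key); draw entry.value */ }
    }
}
```

If the tiles all come from one packed atlas as suggested above, the map collapses to a single entry and the whole grid can be drawn with one bind and one draw call.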