I want to render a wave that shows the frequency data of an audio track.
I have 150 data points per second.
I have rendered it using canvas, drawing one line for each data value, so I draw 150 lines for one second of the song. It displays correctly, but when I scroll the view it lags.
Is there any library that can render the data points using OpenGL, canvas, or any other method that stays smooth while scrolling?
These are two waves. Each line represents one data point, with a minimum value of zero and a maximum of the highest value in the data set.
How can I render this wave in OpenGL or with any other library? It lags while scrolling when rendered using canvas.
Maybe you could show an example of how it looks. How do you create the lines? Are the points scattered? Do you have to connect them, or do you have a fixed point?
Usually, in OpenGL ES the process looks like this:
- read in your audio data
- sort it so that OpenGL knows how to connect the points
- upload it to your vertex shader
I would really recommend this tutorial. I don't know your OpenGL background, but it is a good place to start.
Actually, your application shouldn't be too complicated, and the tutorial should offer you enough information for the case where you want to visualize each second with 150 points.
Just a small overview:
Learn how to set up a window with OpenGL.
You described a 2D application:
- define x values, e.g. -75 to 75
- define y values as your data
- define lines as (x, y) data sets
Use this to draw:
glBegin(GL_LINES);
glVertex2f((float) x, (float) yLow);   // bottom of each line
glVertex2f((float) x, (float) yHigh);  // top of each line
glEnd();
If you have to target mobile graphics, you need shaders, because OpenGL ES only supports rendering through GLSL shaders (the glBegin/glEnd calls above exist only in desktop OpenGL).
Define your OpenGL camera!
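On mobile you would prepare the line data once on the CPU and upload it to a buffer, rather than using glBegin/glEnd. A minimal sketch of building the GL_LINES vertex data from the 150-points-per-second amplitudes (class and method names here are made up for illustration):

```java
// Sketch: turn one amplitude per data point into GL_LINES vertices.
// Each data point becomes a vertical line from y = 0 (the minimum value)
// up to y = amplitude, so every point contributes two (x, y) vertices.
public class WaveformVertices {

    public static float[] buildLineVertices(float[] amplitudes, float xSpacing) {
        // 2 vertices per line, 2 floats (x, y) per vertex
        float[] vertices = new float[amplitudes.length * 4];
        for (int i = 0; i < amplitudes.length; i++) {
            float x = i * xSpacing;
            vertices[i * 4]     = x;             // bottom vertex x
            vertices[i * 4 + 1] = 0f;            // bottom vertex y
            vertices[i * 4 + 2] = x;             // top vertex x
            vertices[i * 4 + 3] = amplitudes[i]; // top vertex y (the data value)
        }
        return vertices;
    }
}
```

The resulting array would be uploaded to a VBO once, then drawn each frame with something like GLES20.glDrawArrays(GLES20.GL_LINES, 0, amplitudes.length * 2), so scrolling only changes the camera/transform rather than rebuilding the geometry.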
Related
I'm trying to implement deferred rendering on an Android phone using OpenGL ES 3.0. I've gotten this to work OK, but only very slowly, which rather defeats the whole point. What really slows things up is the multiple calls to the shaders. Here, briefly, is what my code does:
Geometry Pass:
Render scene - output position, normal and colour to off-screen buffers.
For each light:
a) Stencil Pass:
Render a sphere at the current light position, sized according to the light's intensity. Mark these pixels as influenced by the current light. No actual output.
b) Light Pass:
Render a sphere again, this time using the data from the geometry pass to apply lighting equations to the pixels marked in the previous step. Add the result to an off-screen buffer.
Blit to screen
It's this restarting of the shaders for each light that causes the bottleneck. For example, with 25 lights the above steps run at about 5 fps. If instead I do: Geometry Pass / Stencil Pass - draw 25 lights / Light Pass - draw 25 lights, it runs at around 30 fps. So, does anybody know how I can avoid having to re-initialize the shaders? Or, in fact, just explain what's taking up the time? Would it help, or even be possible (and I'm sorry if this sounds daft), to keep the shader 'open' and overwrite the previous data rather than doing whatever it is that takes so much time restarting the shader? Or should I give this up as a method for having multiple lights, on a mobile device anyway?
Well, I solved the problem of having to swap shaders for each light by using an integer texture as a stencil map, where a certain bit is set to represent each light (so, limited to 32 lights). This means step 2a (above) can be looped, then a single change of shader, and then step 2b can be looped. However (ahahaha!), it turns out that this didn't really speed things up, as it's not, after all, swapping shaders that's the problem, but changing the write destination - that is, multiple calls to glDrawBuffers. I had two such calls in the stencil creation loop: one to draw nowhere when drawing a sphere to calculate which pixels are influenced, and one to draw to the integer texture used as the stencil map. I finally realized that as I use blending (each write with a colour where a single bit is on), it doesn't matter if I write at the pixel calculation stage, so long as it's with all zeros. Getting rid of the unnecessary calls to glDrawBuffers takes the FPS from single figures to the high twenties.
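The bit-per-light bookkeeping that the integer stencil texture performs can be sketched in plain code (illustrative names only, not the actual shader source):

```java
// Sketch of the bit-per-light stencil map idea: each pixel carries a
// 32-bit mask, and bit i means "light i influences this pixel".
public class LightMask {

    // Mark a pixel as influenced by light i (0..31), as the stencil pass
    // does via additive/OR-style blending into the integer texture.
    public static int markLight(int mask, int lightIndex) {
        return mask | (1 << lightIndex);
    }

    // Test the bit in the light pass before applying that light's equations.
    public static boolean isLit(int mask, int lightIndex) {
        return (mask & (1 << lightIndex)) != 0;
    }
}
```

This is also where the 32-light limit mentioned below comes from: one bit per light in a 32-bit integer channel.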
In summary, this method of deferred rendering is certainly faster than forward rendering but limited to 32 lights.
I'd like to say that my code was written just to see if this was a viable method, and many small optimizations could be made. Incidentally, as I was limited to 4 draw buffers, I had to scratch the position map and instead recover the position from gl_FragCoord.xyz. I don't have proper benchmarking tools, so I'd be interested to hear from anyone who can tell me what difference this makes, speed-wise.
I'm making an Android app and I need to draw some shapes using OpenGL ES. I'm able to render them but I'm disappointed with performance. I updated the code to use VBO but I didn't notice any improvement. I want to render at 60 frames per second (16 ms per frame).
I have a test project where I render several triangles on the screen. When I render 1000 triangles it takes about 20 ms per frame (depending on the device).
I want to keep the rendering under 10 ms because I need the rest (6 ms) to perform other calculations (e.g. update positions, detect collisions, etc.).
Here is the code where I render a triangle:
https://github.com/mauriciotogneri/test/blob/master/src/com/testopengl/Polygon.java#L51-66
Here is the code where I iterate over the triangles:
https://github.com/mauriciotogneri/test/blob/master/src/com/testopengl/MapRenderer.java#L117-139
(Change the value of NUMBER_OF_TRIANGLES to display more triangles)
From what I understand, the method GLES20.glDrawArrays(...) takes too much time if I need to call it 1000 times per frame (once per triangle).
Is there another way to render several polygons that doesn't take so much time?
Notes:
In the example all the triangles have a fixed position on the screen but in the real scenario they will move around
In the example I assign a random color to each triangle but in the real scenario each of them will have a fixed color
Put your positions, colors, normals, etc. in one VBO and draw them in a single call.
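As a sketch of what such a batch looks like, here is the interleaving step (the x, y, r, g, b, a vertex layout and all names are assumptions for illustration, not taken from the linked code):

```java
// Sketch: pack per-triangle positions and colors into one interleaved
// array so all triangles can be drawn with a single glDrawArrays call.
// Layout per vertex: x, y, r, g, b, a.
public class TriangleBatch {

    // positions[t] holds 3 vertices * 2 floats; colors[t] holds one RGBA color.
    public static float[] interleave(float[][] positions, float[][] colors) {
        int triangles = positions.length;
        // 3 vertices * (2 position + 4 color floats) per triangle
        float[] out = new float[triangles * 3 * 6];
        int o = 0;
        for (int t = 0; t < triangles; t++) {
            for (int v = 0; v < 3; v++) {
                out[o++] = positions[t][v * 2];     // x
                out[o++] = positions[t][v * 2 + 1]; // y
                out[o++] = colors[t][0];            // r
                out[o++] = colors[t][1];            // g
                out[o++] = colors[t][2];            // b
                out[o++] = colors[t][3];            // a
            }
        }
        return out;
    }
}
```

Upload the result to one VBO, then one GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, triangles * 3) replaces the 1000 per-triangle calls; when the triangles move, you update the buffer contents rather than issuing more draw calls.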
I've been doing some game development on Android and have been able to accomplish the majority of my drawing simply by using the glDrawTexfOES method from the GL extensions library. Using this for drawing my sprites seemed to yield good performance and I didn't have any complaints until I started trying to use it for bitmap fonts.
With the way I have my bitmap fonts set up right now, I read in the character definitions, texture, and character properties from an XML file in order to initialize my font class. Using these properties, a call is made to glDrawTexfOES and is formatted so that the requested character is drawn and scaled to the desired size. This works fine for smaller strings, but unfortunately requires a separate call to glDrawTexfOES for every single character drawn. As you can imagine, this causes noticeable lag and performance issues for larger strings.
Does anyone have advice for how I could render this more intelligently? I've heard about using VBOs for large groups of static objects, but I'm not sure if these are appropriate for the use case of having text that needs to be dynamic as well. Advice from someone who's implemented something similar with OpenGL ES would be much appreciated.
There are several ways to improve the rendering speed of your code. The documentation of glDrawTexfOES doesn't offer many details and only mentions a direct mapping of texels to fragments, so my best guess is that the implementation you are using is not very optimized and is the main cause of your speed problems (I'm assuming you are using OpenGL ES 1.1).
My suggestion is to get rid of glDrawTexfOES and replace it with a custom rendering function using triangle strips and degenerate triangles to connect different strips.
It works like this:
1) Create a buffer to store the vertex positions, texture coordinates and indices, so you can draw the triangles in a single batch. Each letter will be drawn using two triangles, and an additional degenerate triangle will be used (or ignored) to connect it to the next letter.
2) Create degenerate triangles by using the following algorithm:
for (int k = 0; k < MAX_NUMBER_OF_LETTERS; k++)
{
    indices[k * 6] = k * 4;                 // repeat the first vertex (degenerate)
    for (int p = 0; p < 4; p++)
        indices[k * 6 + p + 1] = k * 4 + p; // the letter's own quad as a strip
    indices[k * 6 + 5] = k * 4 + 3;         // repeat the last vertex (degenerate)
}
3) Batch all your draw calls into the buffer you created in step 1. You can send the triangles to the rendering pipeline using glDrawElements with the index buffer from step 2. You can batch as many triangles as you want; the trick to speeding up the rendering is to hold off the draw call until there's going to be a change in the state of OpenGL (like changing the bound texture).
4) You obviously need to use a texture atlas.
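As a sketch of the atlas lookup, assuming a simple uniform grid of glyph cells (an assumption for illustration; real bitmap fonts usually store a per-glyph rectangle in the XML definition instead):

```java
// Sketch: texture coordinates of one glyph cell in a grid texture atlas.
public class GlyphAtlas {

    // Returns {u0, v0, u1, v1} for the cell at (col, row)
    // in a cols x rows grid covering the whole texture.
    public static float[] glyphUv(int col, int row, int cols, int rows) {
        float cellW = 1.0f / cols;
        float cellH = 1.0f / rows;
        return new float[] {
            col * cellW,       row * cellH,        // top-left corner
            (col + 1) * cellW, (row + 1) * cellH   // bottom-right corner
        };
    }
}
```

Each letter's two triangles get these four corners as texture coordinates, so a whole string becomes one batch sampling a single bound texture.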
I also suggest not using VBOs if you are doing some basic 2D stuff. Going that route means changing your code to OpenGL ES 2.0 instead of 1.1, and they are completely different.
I am currently playing around with OpenGL ES for Android. I have decided to set myself the task of making a chair. Do I have to code each individual vertex, or is there a way to multiply one set and transform them 10 units to the left?
For example, instead of having to code out each leg, can I multiply one into four and have them at different positions?
And if so, is this possible outside of the rendering class?
You can do this fairly easily using the glTranslatef() function between each time you draw a chair leg. If you imagine drawing on a piece of paper where your hand is locked in position and can only draw the same chair leg in the same place each time, glTranslatef() moves the piece of paper under your hand between drawing each chair leg.
However, for most complex models like a chair, you may want to consider making them using a 3D modelling software package, such as Blender (which is free). When you save the model as a file, the file actually contains all the vertices. Depending on which file format you save as, you can then write some code to load the file, parse it to extract the vertices, and then use those vertices to draw the chair.
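A sketch of the reuse idea: compute the four leg positions once and translate to each before drawing the same leg geometry (all names and dimensions below are made-up example values):

```java
// Sketch: one leg modelled once, placed four times at the seat's corners.
public class ChairLegs {

    // Corner (x, z) offsets of the four legs for a seat of the given size.
    public static float[][] legOffsets(float width, float depth) {
        return new float[][] {
            {0f, 0f}, {width, 0f}, {0f, depth}, {width, depth}
        };
    }

    // In the GLES 1.x renderer, the draw loop would look roughly like:
    //   for (float[] off : legOffsets(seatWidth, seatDepth)) {
    //       gl.glPushMatrix();
    //       gl.glTranslatef(off[0], 0f, off[1]);
    //       drawLeg(gl);  // the single shared leg vertex data
    //       gl.glPopMatrix();
    //   }
}
```

Since legOffsets is plain Java, it can live outside the rendering class; only the glTranslatef/draw calls need a GL context.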
I want to move an image in a 3-dimensional way in my Android application according to my device's movement. For this, I am getting my x, y, z coordinate values through SensorEvent, but I am unable to find APIs to move an image in 3 dimensions. Could anyone please point me to a way (any APIs) to do this?
Depending on the particulars of your application, you could consider using OpenGL ES for manipulations in three dimensions. A quite common approach then would be to render the image onto a 'quad' (basically a flat surface consisting of two triangles) and manipulate that using matrices you construct based on the accelerometer data.
An alternative might be to look into extending the standard ImageView, which out of the box supports manipulations by 3x3 matrices. For rotation this will be sufficient, but obviously you will need an extra dimension for translation - which you're probably after, given your remark about 'moving' an image.
If you decide to go with the first suggestion, this example code should be quite useful to start with. You'll probably be able to plug your sensor data straight into that and simply add the required math for the matrix manipulations.
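As a sketch of the matrix side of the first suggestion, here is a column-major 4x4 translation matrix built by hand (this mirrors what android.opengl.Matrix.setIdentityM plus translateM produce, written out so it runs without the Android SDK; names are illustrative):

```java
// Sketch: 4x4 column-major model matrix (the layout OpenGL expects)
// translating a textured quad by sensor-derived x/y/z offsets.
public class QuadTransform {

    public static float[] translationMatrix(float tx, float ty, float tz) {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f; // identity diagonal
        m[12] = tx; // column-major: translation lives in the last column
        m[13] = ty;
        m[14] = tz;
        return m;
    }
}
```

Each frame you would rebuild this matrix from the latest SensorEvent values, upload it as a uniform, and multiply the quad's vertex positions by it in the vertex shader.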