OpenGL (ES) -- glBindBuffer throws IllegalArgumentException: remaining < size() - android

I've made a buffer of vertices that draws correctly when using glDrawArrays; however, it fails to load into a VBO. Here's the code:
FloatBuffer circleBuffer = ByteBuffer.allocateDirect(numVertices * 3 *
4).order(ByteOrder.nativeOrder()).asFloatBuffer();
for (int j = 0; j < numVertices; j++) {
circleBuffer.put((float) (Math.cos(theta)));
circleBuffer.put((float) (Math.sin(theta)));
circleBuffer.put(1);
theta += 2 * Math.PI / (numVertices);
}
int[] buffer = new int[1];
int circleIndex=0;
gl11.glGenBuffers(1, buffer,0);
circleIndex = buffer[0];
gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, circleIndex);
gl11.glBufferData(GL11.GL_ARRAY_BUFFER, circleBuffer.capacity() * 4,
circleBuffer, GL11.GL_STATIC_DRAW);
I printed the capacity of the buffer and it is 105, and remaining() is 0. I also tried reassigning the FloatBuffer as a Buffer. What's wrong here? Thanks!
ERROR/AndroidRuntime(7127): java.lang.IllegalArgumentException: remaining() < size
ERROR/AndroidRuntime(7127): at com.google.android.gles_jni.GLImpl.glBufferData(Native Method)
EDIT -- Solution
circleBuffer.flip();
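For completeness, a minimal sketch of the corrected upload under the same variable names as above. After the fill loop the buffer's position sits at the end, so remaining() is 0 until the buffer is rewound, and glBufferData requires remaining() to cover the requested size:
FloatBuffer-based upload (sketch):
circleBuffer.flip(); // position = 0, limit = number of floats written, so remaining() now covers the data
int[] buffer = new int[1];
gl11.glGenBuffers(1, buffer, 0);
gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, buffer[0]);
gl11.glBufferData(GL11.GL_ARRAY_BUFFER, circleBuffer.capacity() * 4, circleBuffer, GL11.GL_STATIC_DRAW);
gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, 0);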

A Java exception which should be deliberately thrown by methods that reject invalid parameters. It extends RuntimeException, which means it is unchecked and does not need to be caught.
The singular name notwithstanding, it can also represent an unsatisfied constraint between several parameters. The more thoroughly a method checks its parameters, the closer the exception is raised to the faulty invocation itself.
In many cases, code that is throwing NullPointerException should instead be argument-checking and throwing this, with a decent explanatory message.
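For example, a hypothetical setter that validates its argument and fails fast with a descriptive message:
void setTimeoutMillis(long millis) {
    if (millis < 0) {
        // reject the bad value at the call site instead of failing later with a confusing NPE
        throw new IllegalArgumentException("timeout must be non-negative, got " + millis);
    }
    this.timeoutMillis = millis;
}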

Related

Unity native OpenGL texture displayed four times

I'm currently facing a problem I simply don't understand.
I employ ARCore for an inside-out tracking task. Since I need to do some additional image processing, I use Unity's capability to load a native C++ plugin. At the very end of each frame I pass the image in YUV_420_888 format as a raw byte array to my native plugin.
A texture handle is created right at the beginning of the component's initialization:
private void CreateTextureAndPassToPlugin()
{
Texture2D tex = new Texture2D(640, 480, TextureFormat.RGBA32, false);
tex.filterMode = FilterMode.Point;
tex.Apply();
debug_screen_.GetComponent<Renderer>().material.mainTexture = tex;
// Pass texture pointer to the plugin
SetTextureFromUnity(tex.GetNativeTexturePtr(), tex.width, tex.height);
}
Since I only need the grayscale image, I basically ignore the UV part of the image and only use the Y (luminance) values, as shown in the following:
uchar *p_out;
int channels = 4;
for (int r = 0; r < image_matrix->rows; r++) {
p_out = image_matrix->ptr<uchar>(r);
for (int c = 0; c < image_matrix->cols * channels; c++) {
unsigned int idx = r * y_row_stride + c;
p_out[c] = static_cast<uchar>(image_data[idx]);
p_out[c + 1] = static_cast<uchar>(image_data[idx]);
p_out[c + 2] = static_cast<uchar>(image_data[idx]);
p_out[c + 3] = static_cast<uchar>(255);
}
}
Then, each frame, the image data is put into a GL texture:
GLuint gltex = (GLuint)(size_t)(g_TextureHandle);
glBindTexture(GL_TEXTURE_2D, gltex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 640, 480, GL_RGBA, GL_UNSIGNED_BYTE, current_image.data);
I know that I use far too much memory by creating and passing the texture as RGBA, but since GL_R8 is not supported in my OpenGL ES 3 setup and GL_ALPHA always led to internal OpenGL errors, I just pass the greyscale value to each color component.
However, in the end the texture is rendered as can be seen in the following image (it appears four times):
At first I thought the reason might lie in the other channels having the same values; however, setting all channels other than the first one to arbitrary values does not have any impact.
Am I missing something with regard to OpenGL texture creation?
YUV_420_888 is a multiplane texture, where the luminance plane only contains a single channel per pixel.
for (int c = 0; c < image_matrix->cols * channels; c++) {
unsigned int idx = r * y_row_stride + c;
Your loop bounds assume c counts 4 channels per pixel, which is right for the output surface, but you then also use it when computing the input surface index. The input surface plane you are using only contains one channel, so idx is wrong.
In general you are also overwriting the same memory multiple times - the loop increments c by one each iteration, but you then write to c, c+1, c+2, and c+3, so you overwrite three of the values you wrote on the previous iteration.
Shorter answer - your OpenGL ES code is fine, but I think you're filling the texture with bad data.
Untested, but I think you need:
for (int c = 0; c < image_matrix->cols * channels; c += channels) {
unsigned int idx = (r * y_row_stride) + (c / channels);
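For reference, the same fixed indexing sketched in Java (hypothetical names: yPlane is the single-channel luminance plane with the given row stride, rgbaOut is a width * height * 4 byte output array):
// Hypothetical inputs: int width, height, yRowStride; byte[] yPlane; byte[] rgbaOut (width * height * 4)
for (int r = 0; r < height; r++) {
    for (int c = 0; c < width; c++) {
        byte y = yPlane[r * yRowStride + c];    // one luminance byte per input pixel
        int out = (r * width + c) * 4;          // four bytes (RGBA) per output pixel
        rgbaOut[out]     = y;
        rgbaOut[out + 1] = y;
        rgbaOut[out + 2] = y;
        rgbaOut[out + 3] = (byte) 255;
    }
}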

PixelBuffer Object and glReadPixel on Android(ARCore) blocking

I know that the default glReadPixels() waits until all the drawing commands are executed on the GL thread. But when you bind a Pixel Buffer Object (PBO) and then call glReadPixels(), it should be asynchronous and will not wait for anything.
But when I bind PBO and do the glReadPixels() it is blocking for some time.
Here's how I initialize the PBO:
mPboIds = IntBuffer.allocate(2);
GLES30.glGenBuffers(2, mPboIds);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, mPboIds.get(0));
GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, mPboSize, null, GLES30.GL_STATIC_READ); //allocates only memory space given data size
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, mPboIds.get(1));
GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, mPboSize, null, GLES30.GL_STATIC_READ);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
and then I use the two buffers to ping-pong around:
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, mPboIds.get(mPboIndex)); //1st PBO
JNIWrapper.glReadPixels(0, 0, mRowStride / mPixelStride, (int)height, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE); //read pixel from the screen and write to 1st buffer(native C++ code)
//don't load anything in the first frame
if (mInitRecord) {
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
//reverse the index
mPboIndex = (mPboIndex + 1) % 2;
mPboNewIndex = (mPboNewIndex + 1) % 2;
mInitRecord = false;
return;
}
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, mPboIds.get(mPboNewIndex)); //2nd PBO
//glMapBufferRange returns pointer to the buffer object
//this is the same thing as calling glReadPixel() without a bound PBO
//The key point is that we can pipeline this call
ByteBuffer byteBuffer = (ByteBuffer) GLES30.glMapBufferRange(GLES30.GL_PIXEL_PACK_BUFFER, 0, mPboSize, GLES30.GL_MAP_READ_BIT); //download from the GPU to the CPU
Bitmap bitmap = Bitmap.createBitmap((int)mScreenWidth,(int)mScreenHeight, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(byteBuffer);
GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
//reverse the index
mPboIndex = (mPboIndex + 1) % 2;
mPboNewIndex = (mPboNewIndex + 1) % 2;
This is called in my draw method every frame.
From my understanding the glReadPixels should not take any time at all, but it's taking around 25 ms (on a Google Pixel 2) and creating the bitmap takes another 40 ms. This only achieves about 13 FPS, which is worse than glReadPixels without a PBO.
Is there anything that I'm missing or wrong in my code?
EDITED since you pointed out that my original hypothesis was incorrect (initial PboIndex == PboNextIndex). Hoping to be helpful, here is C++ code that I just wrote on the native side called through JNI from Android using GLES 3. It seems to work and not block on glReadPixels(...). Note there is only a single glPboIndex variable:
glBindBuffer(GL_PIXEL_PACK_BUFFER, glPboIds[glPboIndex]);
glReadPixels(0, 0, frameWidth_, frameHeight_, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glPboReady[glPboIndex] = true;
glPboIndex = (glPboIndex + 1) % 2;
if (glPboReady[glPboIndex]) {
glBindBuffer(GL_PIXEL_PACK_BUFFER, glPboIds[glPboIndex]);
GLubyte* rgbaBytes = (GLubyte*)glMapBufferRange(
GL_PIXEL_PACK_BUFFER, 0, frameByteCount_, GL_MAP_READ_BIT);
if (rgbaBytes) {
size_t minYuvByteCount = frameWidth_ * frameHeight_ * 3 / 2; // 12 bits/pixel
if (videoFrameBufferSize_ < minYuvByteCount) {
return; // !!! not logging error inside render loop
}
convertToVideoYuv420NV21FromRgbaInverted(
videoFrameBufferAddress_, rgbaBytes,
frameWidth_, frameHeight_);
}
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glPboReady[glPboIndex] = false;
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
...
previous unfounded hypothesis:
Your question doesn't show the code that sets the initial values of mPboIndex and mPboNewIndex, but if they are set to identical initial values, such as 0, then they will have matching values within each loop, which will result in mapping the same PBO that has just been read. In that hypothetical/real scenario, even if 2 PBOs are being used, they are not alternated between glReadPixels and glMapBufferRange, which will then block until the GPU completes the data transfer. I suggest this change to ensure that the PBOs alternate:
mPboNewIndex = mPboIndex;
mPboIndex = (mPboNewIndex + 1) % 2;
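Either way, the key point is that the PBO being mapped must not be the one glReadPixels has just written. A minimal sketch of one-time initial values that keep the question's two indices a frame apart (assumed to run once before the per-frame code above):
// Hypothetical one-time initialisation for the double-buffered PBO scheme in the question.
mPboIndex = 0;     // PBO that glReadPixels writes into this frame
mPboNewIndex = 1;  // PBO that is mapped/read back this frame (it was filled on the previous frame)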

Cross correlation to find sonar echoes

I'm trying to detect echoes of my chirp in a sound recording on Android, and it seems cross-correlation is the most appropriate way of finding where the two signals are similar; from there I can identify peaks in the cross-correlated array, which will correspond to distances.
From my understanding, I have come up with the following cross-correlation function. Is this correct? I wasn't sure whether to add zeros to the beginning and start the correlation a few elements back.
public double[] xcorr1(double[] recording, double[] chirp) {
double[] recordingZeroPadded = new double[recording.length + chirp.length];
for (int i = recording.length; i < recording.length + chirp.length; ++i)
recordingZeroPadded[i] = 0;
for (int i = 0; i < recording.length; ++i)
recordingZeroPadded[i] = recording[i];
double[] result = new double[recording.length + chirp.length - 1];
for (int offset = 0; offset < recordingZeroPadded.length - chirp.length; ++offset)
for (int i = 0; i < chirp.length; ++i)
result[offset] += chirp[i] * recordingZeroPadded[offset + i];
return result;
}
Secondary question:
According to this answer, it can also be calculated like
corr(a, b) = ifft(fft(a_and_zeros) * fft(b_and_zeros[reversed]))
which I don't understand at all but seems easy enough to implement. That said, I have failed (assuming my xcorr1 is correct). Have I completely misunderstood this?
public double[] xcorr2(double[] recording, double[] chirp) {
// assume same length arguments for now
DoubleFFT_1D fft = new DoubleFFT_1D(recording.length);
fft.realForward(recording);
reverse(chirp);
fft.realForward(chirp);
double[] result = new double[recording.length];
for (int i = 0; i < result.length; ++i)
result [i] = recording[i] * chirp[i];
fft.realInverse(result, true);
return result;
}
Assuming I got both working, which function would be most appropriate given that the arrays will contain a few thousand elements?
EDIT: Btw, I have tried adding zeros to both ends of both arrays for the FFT version.
EDIT after SleuthEye's response:
Can you just verify that, because I'm dealing with 'actual' (real-valued) data, I need only do half the computations (the real parts) by doing a real transform?
From your code, it looks as though the odd-numbered elements in the array returned by the real transform are imaginary. What's going on here?
How am I going from an array of real numbers to complex? Or is this the purpose of a transform, to move real numbers into the complex domain? (But the real numbers are just a subset of the complex numbers, so wouldn't they already be in this domain?)
If realForward does in fact return imaginary/complex numbers, how does it differ from complexForward? And how do I interpret the results? The magnitude of the complex number?
I apologise for my lack of understanding with regard to transforms; I have only studied Fourier series so far.
Thanks for the code. Here is 'my' working implementation:
public double[] xcorr2(double[] recording, double[] chirp) {
// pad to power of 2 for optimisation
int y = 1;
while (Math.pow(2,y) < recording.length + chirp.length)
++y;
int paddedLength = (int)Math.pow(2,y);
double[] paddedRecording = new double[paddedLength];
double[] paddedChirp = new double[paddedLength];
for (int i = 0; i < recording.length; ++i)
paddedRecording[i] = recording[i];
for (int i = recording.length; i < paddedLength; ++i)
paddedRecording[i] = 0;
for (int i = 0; i < chirp.length; ++i)
paddedChirp[i] = chirp[i];
for (int i = chirp.length; i < paddedLength; ++i)
paddedChirp[i] = 0;
reverse(chirp);
DoubleFFT_1D fft = new DoubleFFT_1D(paddedLength);
fft.realForward(paddedRecording);
fft.realForward(paddedChirp);
double[] result = new double[paddedLength];
result[0] = paddedRecording[0] * paddedChirp[0]; // value at f=0Hz is real-valued
result[1] = paddedRecording[1] * paddedChirp[1]; // value at f=fs/2 is real-valued and packed at index 1
for (int i = 1; i < result.length / 2; ++i) {
double a = paddedRecording[2*i];
double b = paddedRecording[2*i + 1];
double c = paddedChirp[2*i];
double d = paddedChirp[2*i + 1];
// (a+b*j)*(c-d*j) = (a*c+b*d) + (b*c-a*d)*j
result[2*i] = a*c + b*d;
result[2*i + 1] = b*c - a*d;
}
fft.realInverse(result, true);
// discard trailing zeros
double[] result2 = new double[recording.length + chirp.length - 1];
for (int i = 0; i < result2.length; ++i)
result2[i] = result[i];
return result2;
}
However, until about 5000 elements each, xcorr1 seems to be quicker. Am I doing anything particularly slow (perhaps the constant 'new'ing of memory -- maybe I should cast to an ArrayList)? Or the arbitrary way in which I generated the arrays to test them? Or should I do the conjugates instead of reversing it? That said, performance isn't really an issue so unless there's something obvious you needn't bother pointing out optimisations.
Your implementation of xcorr1 does correspond to the standard signal-processing definition of cross-correlation.
Regarding your question about adding zeros at the beginning: adding chirp.length-1 zeros would make index 0 of the result correspond to the start of transmission. Note, however, that the peak of the correlation output occurs chirp.length-1 samples after the start of an echo (the chirp has to be aligned with the full received echo). When using the peak index to obtain echo delays, you then have to adjust for that correlator delay, either by subtracting it or by discarding the first chirp.length-1 output values. Since the additional zeros correspond to that many extra outputs at the beginning, you are probably better off not adding them in the first place.
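In code form, the delay adjustment is just (sketch; peakIndex and sampleRate are assumed names, with no extra leading zeros added to the recording):
int correlatorDelay = chirp.length - 1;             // the matched-filter peak lags the echo start by this many samples
int echoStartSample = peakIndex - correlatorDelay;  // peakIndex: index of a correlation peak (assumed)
double echoDelaySeconds = echoStartSample / sampleRate;  // sampleRate: recording rate in Hz (assumed double)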
For xcorr2, however, a few things need to be addressed. First, if the recording and chirp inputs are not already zero-padded to at least the combined chirp + recording length, you need to do so (preferably to a power-of-2 length for performance reasons). As you are aware, they both need to be padded to the same length.
Second, you didn't take into account that the multiplication indicated in the posted reference answer is in fact a complex multiplication (whereas the DoubleFFT_1D.realForward API packs its complex output into a plain double array). Now, if you are going to implement something such as a complex multiplication with the chirp's FFT, you might as well implement the multiplication with the complex conjugate of the chirp's FFT (the alternate implementation indicated in the reference answer), removing the need to reverse the time-domain values.
Also accounting for DoubleFFT_1D.realForward's packing order for even-length transforms, you would get:
// [...]
fft.realForward(paddedRecording);
fft.realForward(paddedChirp);
result[0] = paddedRecording[0]*paddedChirp[0]; // value at f=0Hz is real-valued
result[1] = paddedRecording[1]*paddedChirp[1]; // value at f=fs/2 is real-valued and packed at index 1
for (int i = 1; i < result.length/2; ++i) {
double a = paddedRecording[2*i];
double b = paddedRecording[2*i+1];
double c = paddedChirp[2*i];
double d = paddedChirp[2*i+1];
// (a+b*j)*(c-d*j) = (a*c+b*d) + (b*c-a*d)*j
result[2*i] = a*c + b*d;
result[2*i+1] = b*c - a*d;
}
fft.realInverse(result, true);
// [...]
Note that the result array would be of the same size as paddedRecording and paddedChirp, but only the first recording.length+chirp.length-1 should be kept.
Finally, relative to which function is the most appropriate for arrays of a few thousand elements, the FFT version xcorr2 is likely going to be much faster (provided you restrict array lengths to powers of 2).
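As an aside, the next-power-of-two padding length can be computed without the Math.pow loop used in the question's xcorr2 (a small sketch, assuming the lengths fit in an int):
int needed = recording.length + chirp.length;
int paddedLength = Integer.highestOneBit(needed);   // largest power of two <= needed
if (paddedLength < needed)
    paddedLength <<= 1;                              // round up to the next power of two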
The direct version doesn't require zero-padding first. You just take recording of length M and chirp of length N and calculate result of length N+M-1. Work through a tiny example by hand to grok the steps:
recording = [1, 2, 3]
chirp = [4, 5]
  1 2 3
4 5
  1 2 3
  4 5
  1 2 3
    4 5
  1 2 3
      4 5
result = [1*5, 1*4 + 2*5, 2*4 + 3*5, 3*4] = [5, 14, 23, 12]
The FFT method is much faster if you have long arrays. In this case you have to zero-pad each input to size M+N-1 so that both input arrays are the same size before taking the FFT.
Also, the FFT output is complex numbers, so you need to use complex multiplication. (1+2j)*(3+4j) is -5+10j, not 3+8j. I don't know how your complex numbers are arranged or handled, but make sure this is right.
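If it helps, a complex multiply on (re, im) pairs is just (sketch):
// (a + b*j) * (c + d*j) = (a*c - b*d) + (a*d + b*c)*j
static double[] complexMultiply(double a, double b, double c, double d) {
    return new double[] { a * c - b * d, a * d + b * c };
}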
Or is this the purpose of a transform; to move real numbers into the complex domain?
No, the Fourier transform transforms from the time domain to the frequency domain. The time domain data can be either real or complex, and the frequency domain data can be either real or complex. In most cases you have real data with a complex spectrum. You need to read up on the Fourier transform.
If realForward does in fact return imaginary/complex numbers, how does it differ from complexForward?
The real FFT takes a real input, while the complex FFT takes a complex input. Both transforms produce complex numbers as their output. That's what the DFT does. The only time a DFT produces real output is if the input data is symmetrical (in which case you can use the DCT to save even more time).

glReadPixels failed with format GL_ALPHA

I want to draw a font in my Android game using the FreeType library. I get the glyph texture from the library and upload it to the FBO, which I use to render the string label.
When I run this code, it works and I get the expected data; the font shows up fine:
for (int j = 0; j < height; j ++) {
glReadPixels ( 0, j, width, 1,
GL_RGBA, GL_UNSIGNED_BYTE, data + j*bytesPerRow);
}
But after I change the format to GL_ALPHA, it always returns 0 on the Android device,
and the GL error log shows: got error: 0x500 (GL_INVALID_ENUM). So does this mean I can't read the pixels with GL_ALPHA?
The failing code is:
for (int j = 0; j < height; j ++) {
glReadPixels ( 0, j, width, 1,
GL_ALPHA, GL_UNSIGNED_BYTE, data + j*bytesPerRow);
}
I don't know why. Any help?
OpenGL ES is only required to support 2 format / data type pairs in a call to glReadPixels (...):
1. GL_RGBA, GL_UNSIGNED_BYTE (you already know this one)
2. An implementation-defined pair, queried via GL_IMPLEMENTATION_COLOR_READ_FORMAT and GL_IMPLEMENTATION_COLOR_READ_TYPE
You have discovered unfortunately that GL_ALPHA, GL_UNSIGNED_BYTE is NOT the second supported format / data type pair.
To figure out what the second supported pair is, consider the following code:
GLint imp_fmt, imp_type;
glGetIntegerv (GL_IMPLEMENTATION_COLOR_READ_FORMAT, &imp_fmt);
glGetIntegerv (GL_IMPLEMENTATION_COLOR_READ_TYPE, &imp_type);
printf ("Supported Color Format/Type: %x/%x\n", imp_fmt, imp_type);
You will have to adjust the code accordingly, since this is C and you are using Java... but you get the idea.
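A rough Java equivalent of that query (untested sketch, using the android.opengl.GLES20 constants, which correspond to the same enums on ES 2.0 and later):
int[] fmt = new int[1];
int[] type = new int[1];
GLES20.glGetIntegerv(GLES20.GL_IMPLEMENTATION_COLOR_READ_FORMAT, fmt, 0);
GLES20.glGetIntegerv(GLES20.GL_IMPLEMENTATION_COLOR_READ_TYPE, type, 0);
Log.d("ReadPixels", "Supported color format/type: 0x"
        + Integer.toHexString(fmt[0]) + "/0x" + Integer.toHexString(type[0]));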
Chances are very good that your implementation does not have a single-channel format for use with glReadPixels (...) considering there is no single-channel color-renderable format without the extension: GL_EXT_texture_rg.

Android and OpenGL "how display faces"

I want to display, for example, an *.obj file.
Normally, in desktop OpenGL, I use:
glBegin(GL_TRIANGLES);
glVertex3f(Face[i].VertexIndex);
glTexCoord2f(Face[i].TexcoordIndex);
glNormal3f(Face[i].NormalIndex);
glEnd();
But in Android OpenGL I don't have these functions...
I have glDrawElements(...),
but when I want to draw the face 34/54/3 (vertex/texcoord/normal indices into the arrays),
it draws with a single index, 34/34/34...
So how can I draw an *.obj file?
I searched the web and found this topic:
http://www.anddev.org/android-2d-3d-graphics-opengl-problems-f55/obj-import-to-opengl-trouble-t48883.html So I am writing a model editor in C# for my game, and I wrote something like this as a test:
public void display2()
{
GL.EnableClientState(ArrayCap.VertexArray);
GL.EnableClientState(ArrayCap.TextureCoordArray);
GL.EnableClientState(ArrayCap.NormalArray);
double[] vertexBuff = new double[faces.Count * 3 * 3];
double[] normalBuff = new double[faces.Count * 3 * 3];
double[] texcorBuff = new double[faces.Count * 3 * 2];
int i_3 = 0, i_2 = 0; // running write positions into the flat buffers
foreach (face f in faces)
{
for (int i = 0; i < f.vector.Length; i++)
{
vertexBuff[i_3] = mesh[f.vector[i]].X;
vertexBuff[i_3 + 1] = mesh[f.vector[i]].Y;
vertexBuff[i_3 + 2] = mesh[f.vector[i]].Z;
normalBuff[i_3] = normal[f.normal[i]].X;
normalBuff[i_3 + 1] = normal[f.normal[i]].Y;
normalBuff[i_3 + 2] = normal[f.normal[i]].Z;
texcorBuff[i_2] = texture[f.texCord[i]].X;
texcorBuff[i_2 + 1] = texture[f.texCord[i]].Y;
i_3 += 3;
i_2 += 2;
}
}
GL.VertexPointer<double>(3, VertexPointerType.Double, 0, vertexBuff);
GL.TexCoordPointer<double>(2, TexCoordPointerType.Double, 0, texcorBuff);
GL.NormalPointer<double>(NormalPointerType.Double, 0, normalBuff);
GL.DrawArrays(BeginMode.Triangles, 0, faces.Count * 3);
GL.DisableClientState(ArrayCap.VertexArray);
GL.DisableClientState(ArrayCap.TextureCoordArray);
GL.DisableClientState(ArrayCap.NormalArray);
}
And it's working... but I think this could be optimized further.
I don't want to convert my model data to array buffers,
because that takes too much space in memory... any suggestions?
I'm not an Android programmer, but I assume it uses OpenGL ES, in which these functions are missing (and deprecated in desktop OpenGL anyway).
Tutorials explaining the proper solution are buried amongst a bunch of others that show how to draw triangles with the glVertex3f family (because it gives easy and fast results, but is totally pointless). I find it unfortunate, since nobody should use those functions any more.
glBegin/glEnd, glVertex3f, glTexCoord2f, and such functions are now deprecated for performance's sake (they are "slow" because we have to limit the number of calls to the graphics library). I won't expand much on that since you can search for it if interested.
Instead, make use of vertex and index buffers. I'm sorry I have no "perfect" link to recommend, but you should easily find what you need on Google :)
However, I dug up some code from an ancient C# project:
Note: the OpenTK bindings change function names, but they remain very close to the OpenGL ones; for example, glVertex3f becomes GL.Vertex3.
The Vertex definition
A simple struct to store your custom vertex's information (position, normal (if needed), color...).
[System.Runtime.InteropServices.StructLayout(System.Runtime.InteropServices.LayoutKind.Sequential, Pack = 1)]
public struct Vertex
{
public Core.Math.Vector3 Position;
public Core.Math.Vector3 Normal;
public Core.Math.Vector2 UV;
public uint Coloring;
public Vertex(float x, float y, float z)
{
this.Position = new Core.Math.Vector3(x, y, z);
this.Normal = new Core.Math.Vector3(0, 0, 0);
this.UV = new Core.Math.Vector2(0, 0);
System.Drawing.Color color = System.Drawing.Color.Gray;
this.Coloring = (uint)color.A << 24 | (uint)color.B << 16 | (uint)color.G << 8 | (uint)color.R;
}
}
The Vertex Buffer class
It's a wrapper class around an OpenGL buffer object to handle our vertex format.
public class VertexBuffer
{
public uint Id;
public int Stride;
public int Count;
public VertexBuffer(Graphics.Objects.Vertex[] vertices)
{
int size;
// We create an OpenGL buffer object
GL.GenBuffers(1, out this.Id); //note: out is like passing an object by reference in C#
this.Stride = OpenTK.BlittableValueType.StrideOf(vertices); //size in bytes of the VertexType (Vector3 size*2 + Vector2 size + uint size)
this.Count = vertices.Length;
// Fill the buffer with our vertices data
GL.BindBuffer(BufferTarget.ArrayBuffer, this.Id);
GL.BufferData(BufferTarget.ArrayBuffer, (System.IntPtr)(vertices.Length * this.Stride), vertices, BufferUsageHint.StaticDraw);
GL.GetBufferParameter(BufferTarget.ArrayBuffer, BufferParameterName.BufferSize, out size);
if (vertices.Length * this.Stride != size)
throw new System.ApplicationException("Vertex data not uploaded correctly");
}
}
The Indices Buffer class
Very similar to the vertex buffer, it stores vertex indices of each face of your model.
public class IndexBuffer
{
public uint Id;
public int Count;
public IndexBuffer(uint[] indices)
{
int size;
this.Count = indices.Length;
GL.GenBuffers(1, out this.Id);
GL.BindBuffer(BufferTarget.ElementArrayBuffer, this.Id);
GL.BufferData(BufferTarget.ElementArrayBuffer, (System.IntPtr)(indices.Length * sizeof(uint)), indices,
BufferUsageHint.StaticDraw);
GL.GetBufferParameter(BufferTarget.ElementArrayBuffer, BufferParameterName.BufferSize, out size);
if (indices.Length * sizeof(uint) != size)
throw new System.ApplicationException("Indices data not uploaded correctly");
}
}
Drawing buffers
Then, to render a triangle, you have to create one vertex buffer to store the vertices' positions and one index buffer containing the indices of the vertices [0, 1, 2] (pay attention to the counter-clockwise winding rule, but it is the same as with the glVertex3f method).
When done, just call this function with the buffers. Note that you can use multiple sets of indices with only one vertex buffer to render only some of the faces each time.
void DrawBuffer(VertexBuffer vBuffer, IndexBuffer iBuffer)
{
// 1) Ensure that the VertexArray client state is enabled.
GL.EnableClientState(ArrayCap.VertexArray);
GL.EnableClientState(ArrayCap.NormalArray);
GL.EnableClientState(ArrayCap.TextureCoordArray);
// 2) Bind the vertex and element (=indices) buffer handles.
GL.BindBuffer(BufferTarget.ArrayBuffer, vBuffer.Id);
GL.BindBuffer(BufferTarget.ElementArrayBuffer, iBuffer.Id);
// 3) Set up the data pointers (vertex, normal, color) according to your vertex format.
GL.VertexPointer(3, VertexPointerType.Float, vBuffer.Stride, new System.IntPtr(0));
GL.NormalPointer(NormalPointerType.Float, vBuffer.Stride, new System.IntPtr(Vector3.SizeInBytes));
GL.TexCoordPointer(2, TexCoordPointerType.Float, vBuffer.Stride, new System.IntPtr(Vector3.SizeInBytes * 2));
GL.ColorPointer(4, ColorPointerType.UnsignedByte, vBuffer.Stride, new System.IntPtr(Vector3.SizeInBytes * 3 + Vector2.SizeInBytes));
// 4) Call DrawElements. (Note: the last parameter is an offset into the element buffer and will usually be IntPtr.Zero).
GL.DrawElements(BeginMode.Triangles, iBuffer.Count, DrawElementsType.UnsignedInt, System.IntPtr.Zero);
//Disable client state
GL.DisableClientState(ArrayCap.VertexArray);
GL.DisableClientState(ArrayCap.NormalArray);
GL.DisableClientState(ArrayCap.TextureCoordArray);
}
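Back to the Android side of the original question: OpenGL ES only takes one index per vertex, so the separate vertex/texcoord/normal indices from the .obj file must first be expanded into per-vertex attribute arrays that share a single index (which is essentially what the question's display2() already does). A rough Java sketch of that expansion step, with hypothetical field names (Face, objPositions, objNormals, objTexCoords) and no deduplication of repeated index combinations:
float[] positions = new float[faces.size() * 3 * 3];   // 3 vertices per face, xyz each
float[] normals   = new float[faces.size() * 3 * 3];
float[] texCoords = new float[faces.size() * 3 * 2];
short[] indices   = new short[faces.size() * 3];       // one shared index per expanded vertex
int v = 0;
for (Face f : faces) {                                  // each face assumed triangulated
    for (int i = 0; i < 3; i++, v++) {
        System.arraycopy(objPositions, f.vertexIndex[i] * 3, positions, v * 3, 3);
        System.arraycopy(objNormals,   f.normalIndex[i] * 3, normals,   v * 3, 3);
        System.arraycopy(objTexCoords, f.texCoordIndex[i] * 2, texCoords, v * 2, 2);
        indices[v] = (short) v;                         // duplicate combinations could be merged to save memory
    }
}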
I hope this can help ;)
See this tutorial on glVertex arrays
