Can't draw loaded models in OpenGL ES 1.x with C++ - Android

I load obj models and try to render them with OpenGL ES using the Android NDK:
class ObjModel {
public:
    ObjModel();
    ~ObjModel();
    int numVertex, numNormal, numTexCoord, numTriange;
    float *vertexArray;
    float *normalArray;
    float *texCoordArray;
    unsigned short *indexArray;
    void loadModel(string fileName);
};
model->loadModel(filename);
glVertexPointer(3, GL_FLOAT, 0, &(model->vertexArray[0]));
glNormalPointer(GL_FLOAT, 0, &(model->normalArray[0]));
glDrawElements(GL_TRIANGLES, model->numTriange, GL_UNSIGNED_SHORT,
               &(model->indexArray[0]));
The model is not rendered fully; I see only part of it.
I checked the data in the arrays and it is parsed properly, so I think the only issue might be with how I pass the arguments. Am I doing it right?

Hope this helps! I think you are just missing a factor of 3: the second argument of glDrawElements is the number of indices to draw, not the number of triangles, and each triangle uses three indices:
glDrawElements(GL_TRIANGLES, 3 * model->numTriange, GL_UNSIGNED_SHORT,
               &(model->indexArray[0]));

Related

android - Can't read pixels from GraphicBuffer on Adreno GPU by Karthik's method (hacky alternative to glReadPixels)

Since July, I have been developing an Android application to edit video files such as .avi and .flv. I use FFMPEG and OpenGL ES 2.0 to implement this application.
Because executing a filter effect like "Blur" on the CPU requires too many calculations, I decided to use OpenGL ES 2.0 so the filter effect is applied to a video frame by the GPU and a shader.
What I am trying to do is use a shader to apply a filter effect to a frame of video and then get the pixels that are stored in the framebuffer.
So I have to use glReadPixels, the only OpenGL ES 2.0 call that can be used to get pixels out of a framebuffer. But according to many GPU development guides, using glReadPixels is not recommended, and the guides warn about its potential risks. Also, the performance of glReadPixels differs depending on GPU version and vendor. I could not definitely decide to use glReadPixels, so I tried to find another method for getting the pixels that result from the GPU calculation.
After a few days, I found a hacky method for getting the pixel data by using an Android GraphicBuffer.
Here is the link.
From that link, I tried applying Karthik's method in my code.
The only difference is:
// render method I made
void renderFrame(){
    /* some code to init */
    glBindFramebuffer(GL_FRAMEBUFFER, iFBO);

    /* Set the viewport according to the FBO's texture. */
    glViewport(0, 0, mTexWidth, mTexHeight);

    /* Clear screen on FBO. */
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Different code compared to Karthik's.
    contents->setTexture();
    contents->draw(mPositionVarIndex, mTextrueCoIndex);
    contents->releaseData();

    /* Unbind the FrameBuffer Object so subsequent drawing calls go to the EGL window surface. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    LOGI("Read Graphic Buffer");
    // Just in case the buffer was not created yet
    void* vaddr;
    // Lock the buffer and retrieve a pointer where we are going to write the data
    buffer->lock(GRALLOC_USAGE_SW_WRITE_OFTEN, &vaddr);
    if (vaddr == NULL) {
        LOGE("lock error");
        buffer->unlock();
        return;
    }
    /* some code that uses the pixels from the GraphicBuffer... */
}
void setTexture(){
    glGenTextures(1, mTexture);
    glBindTexture(GL_TEXTURE_2D, mTexture[0]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mData);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glGenerateMipmap(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 0);
}

void releaseData(){
    glDeleteTextures(1, mTexture);
    glDeleteBuffers(1, mVbo);
}
void draw(int positionIndex, int textureIndex){
    mVbo[0] = create_vbo(lengthOfArray*sizeOfFloat*2, NULL, GL_STATIC_DRAW);

    glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
    glBufferSubData(GL_ARRAY_BUFFER, 0, lengthOfArray*sizeOfFloat, this->vertexData);
    glEnableVertexAttribArray(positionIndex);
    // checkGlError("glEnableVertexAttribArray");
    glVertexAttribPointer(positionIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
    // checkGlError("glVertexAttribPointer");
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
    glBufferSubData(GL_ARRAY_BUFFER, lengthOfArray*sizeOfFloat, lengthOfArray*sizeOfFloat, this->mImgTextureData);
    glEnableVertexAttribArray(textureIndex);
    glVertexAttribPointer(textureIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(lengthOfArray*sizeOfFloat));
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, mTexture[0]);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 6);
    checkGlError("glDrawArrays");
}
I use a texture and render a frame to fill the buffer. I have two test phones: a Samsung Galaxy S2, whose renderer is a Mali-400MP, and an LG Optimus G Pro, whose renderer is an Adreno(TM) 320. The Galaxy S2 works well with the above code and Karthik's method, but with the LG smartphone there are some problems:
E/libgenlock(17491): perform_lock_unlock_operation: GENLOCK_IOC_DREADLOCK failed (lockType0x1,err=Connection timed out fd=47)
E/gralloc(17491): gralloc_lock: genlock_lock_buffer (lockType=0x2) failed
W/GraphicBufferMapper(17491): lock(...) failed -22 (Invalid argument)
According to this link:
On Qualcomm hardware pre-Android-4.2, a Qualcomm-specific mechanism,
named Genlock, is used.
Since the only errors I could see were related to Genlock, I carefully guessed at some problem between GraphicBuffer and the Qualcomm GPU. After that, I searched and read the code of Gralloc.cpp, GraphicBufferMapper.cpp, GraphicBuffer.cpp and the *.h files to find the reasons for those errors, but failed.
My questions are:
Is this the right approach to getting a filter effect from GPU calculation? If not, how can I get a filter effect like "Blur", which requires so many calculations?
Does Karthik's method not work for Qualcomm GPUs? I want to know why those errors occur only on Qualcomm's Adreno GPU.
Make sure your GraphicBuffer allocation has GRALLOC_USAGE_SW_READ_OFTEN specified. Without it you may not be able to lock the buffer from code running on the CPU.
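For illustration, a minimal sketch of such an allocation, assuming the internal android::GraphicBuffer class from the AOSP ui headers that the hacky approach relies on (this is not a public NDK API, so names can vary between platform versions):

#include <ui/GraphicBuffer.h>

// Allocate a buffer the GPU can render into and the CPU can read back.
android::GraphicBuffer* buffer = new android::GraphicBuffer(
        mTexWidth, mTexHeight, android::PIXEL_FORMAT_RGBA_8888,
        android::GraphicBuffer::USAGE_HW_TEXTURE |    // GPU may sample/render it
        android::GraphicBuffer::USAGE_SW_READ_OFTEN); // CPU may lock it for reading

// When reading the result back, lock with the matching read usage:
void* vaddr;
buffer->lock(GRALLOC_USAGE_SW_READ_OFTEN, &vaddr);
/* read pixels from vaddr ... */
buffer->unlock();

Note that the question's renderFrame locks with GRALLOC_USAGE_SW_WRITE_OFTEN even though it only reads; the usage requested at lock time should match both the access you perform and the usage flags the buffer was allocated with.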
Unrelated but possibly suggestive of a better approach: see the CameraToMpegTest example, which does a trivial edit to live camera input using a GLES 2.0 shader.
Update: there is now an example of applying filters with the GPU in Grafika. You can see a screen-recorded demo here.

GL_LUMINANCE with byte array on OpenGL ES 2.0

I'm combining YUV data obtained with libvpx (the WebM decoding library) and an OpenGL ES 2.0 shader (on Android).
Both of the following calls use the same byte array, but the second one is not drawn correctly:
Success:
// ex) unsigned char *p = yuv.y, yuv.u or yuv.v;
for(int dy = 0; dy < hh; dy++){
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, dy, ww, 1, GL_LUMINANCE, GL_UNSIGNED_BYTE, p);
    p += ww;
}
Fail:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, ww, hh, GL_LUMINANCE, GL_UNSIGNED_BYTE, p);
Because I'm not knowledgeable about OpenGL, I don't understand the reason for this.
I also think that calling glTexSubImage2D once per line will hurt performance. Can this be improved?
My best guess is that the data you are passing to glTexSubImage2D is not correctly aligned.
From the glTexSubImage2D Reference page for OpenGL ES 2.0:
Storage parameter GL_UNPACK_ALIGNMENT, set by glPixelStorei, affects the way that data is read out of client memory. See glPixelStorei for a description.
Passing a single line at a time from your data probably hides the fact that each line is not correctly aligned, and therefore the call succeeds.
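If that is the cause, a likely fix, assuming your luminance rows are tightly packed (one byte per pixel, no row padding), is to relax the default unpack alignment before uploading the whole plane in one call:

// Tell GL that rows in client memory are byte-aligned, not padded to 4 bytes.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, ww, hh, GL_LUMINANCE, GL_UNSIGNED_BYTE, p);

With a one-byte luminance format, the default GL_UNPACK_ALIGNMENT of 4 makes GL misread every row after the first whenever ww is not a multiple of 4. This also restores the single-call upload, avoiding the per-line loop entirely.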

OpenGL ES 2.0 glDrawElements index pointer error

I'm having trouble texturing a cube with different textures per face. I can draw the cube with one texture on all the faces, but when I try to use multiple textures it fails. The way I'm trying to do it is like so:
//my indexing array located in a header file
#define NUM_IMAGE_OBJECT_INDEX 36
static const unsigned short cubeIndices[NUM_IMAGE_OBJECT_INDEX] =
{
0, 1, 2, 2, 3, 0, // front
4, 5, 6, 6, 7, 4, // right
8, 9,10, 10,11, 8, // top
12,13,14, 14,15,12, // left
16,17,18, 18,19,16, // bottom
20,21,22, 22,23,20 // back
};
Now, in my rendering function, the following works for drawing the cube with a single texture:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, iconTextureID);
glDrawElements(GL_TRIANGLES, NUM_IMAGE_OBJECT_INDEX, GL_UNSIGNED_SHORT, 0);
This does not work:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, iconTextureID);
glDrawElements(GL_TRIANGLES, NUM_IMAGE_OBJECT_INDEX, GL_UNSIGNED_SHORT, (const GLvoid*)&cubeIndices[0]);
which should equate to the same thing, judging from some other examples. Ultimately I would like to be doing something like this:
for(int i = 0; i < 6; i++){
    iconTextureID = textureID[i];
    glBindTexture(GL_TEXTURE_2D, iconTextureID);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (const GLvoid*)&cubeIndices[i*6]); // indices 0-5 use texture 1, 6-11 use texture 2, etc.
}
Does anyone know what could be wrong with this indexing? I've basically copy-pasted this code from an Android project (which works); I'm currently trying to do this on iOS.
In OpenGL ES 2.0, index data can come either from buffer objects or from pointers to client memory. Your code is evidently using a buffer object, even though you don't show where you create it, where you upload your client-side index array into it, or where you call glBindBuffer(GL_ELEMENT_ARRAY_BUFFER) before rendering with it; it must be there, or your code would have crashed. When a buffer is bound to GL_ELEMENT_ARRAY_BUFFER, OpenGL expects the "pointer" given to glDrawElements to be a byte offset into the buffer object, not a client-memory pointer.
This is why copy-and-paste coding is a bad idea. The code you copied from was probably using client memory; you are not.
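For reference, a minimal sketch of the element-array-buffer setup that must exist somewhere in your project (the name indexBufferID is hypothetical):

GLuint indexBufferID;
glGenBuffers(1, &indexBufferID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID);
// After this upload, the last argument of glDrawElements is interpreted
// as a byte offset into this buffer, not as a client-memory pointer.
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(cubeIndices), cubeIndices, GL_STATIC_DRAW);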
If you want your looping code to work, you need to do the pointer arithmetic yourself:
for(int i = 0; i < 6; i++)
{
    iconTextureID = textureID[i];
    glBindTexture(GL_TEXTURE_2D, iconTextureID);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, reinterpret_cast<void*>(i * 6 * sizeof(GLushort)));
}

How to draw/render a Bullet Physics collision body/shape?

I have integrated the Bullet Physics engine into my Android program with the NDK (I am using Vuforia's imagetarget example for Android), and it is set up and working correctly. However, I would like to render/draw my collision boxes/planes so I can see my rigid bodies (btRigidBody) and collision shapes (btCollisionShape). I'm positive this is possible, but I can't find any tutorials on how to do it!
I have taken the Hello World Bullet Physics tutorial from their wiki page and modified it to apply the transformations from the falling physics body to a 3D object I have in OpenGL ES 2.0, so I can view the collision bodies. Here is the code I am using to render the object:
void drawRigidBody(btRigidBody* body, QCAR::Matrix44F modelViewMatrix, unsigned int textureID)
{
    btTransform trans;
    body->getMotionState()->getWorldTransform(trans);
    LOG("sphere pos: (x %f , y %f, z %f)", trans.getOrigin().getX(), trans.getOrigin().getY(), trans.getOrigin().getZ());

    float physicsMatrix[16];
    trans.getOpenGLMatrix(physicsMatrix);

    SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale,
                                 &modelViewMatrix.data[0]);

    QCAR::Matrix44F modelViewProjection, objectMatrix;
    SampleUtils::multiplyMatrix(&modelViewMatrix.data[0], physicsMatrix, &objectMatrix.data[0]);
    SampleUtils::multiplyMatrix(&projectionMatrix.data[0], &objectMatrix.data[0], &modelViewProjection.data[0]);

    glVertexAttribPointer(vertexHandle, 3, GL_FLOAT, GL_FALSE, 0,
                          (const GLvoid*) &signVerts[0]);
    glVertexAttribPointer(normalHandle, 3, GL_FLOAT, GL_FALSE, 0,
                          (const GLvoid*) &signNormals[0]);
    glVertexAttribPointer(textureCoordHandle, 2, GL_FLOAT, GL_FALSE, 0,
                          (const GLvoid*) &signTexCoords[0]);

    glEnableVertexAttribArray(vertexHandle);
    glEnableVertexAttribArray(normalHandle);
    glEnableVertexAttribArray(textureCoordHandle);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glUniformMatrix4fv(mvpMatrixHandle, 1, GL_FALSE,
                       (GLfloat*) &modelViewProjection.data[0]);
    glDrawArrays(GL_TRIANGLES, 0, signNumVerts);
}
EDIT: Looking at the code for btBoxShape, I noticed you can grab the box vertices and normals:
btVector3** vertices = wallShape->getVertices();
btVector3** normals = wallShape->getNormals();
But you can't grab a list of indices to draw the vertex points in a certain order!
If I recall correctly, this is not the proper way to draw debug shapes in Bullet. Did you read the user manual (PDF), page 16?
You are supposed to implement your own debug drawer class which implements btIDebugDraw, and in this class you implement the drawLine method.
You pass this debug drawer to Bullet with setDebugDrawer, and then enable it with world->getDebugDrawer()->setDebugMode(debugMode);
To draw the world, call world->debugDrawWorld();
Bullet then calls drawLine on your custom class numerous times until a wireframe model of the physics world has been drawn.
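For illustration, a minimal sketch of such a drawer. The btIDebugDraw overrides are Bullet's interface; renderLine is a hypothetical helper of your own that issues the actual GLES 2.0 draw call:

class GLDebugDrawer : public btIDebugDraw
{
    int mDebugMode;
public:
    GLDebugDrawer() : mDebugMode(0) {}
    virtual void drawLine(const btVector3& from, const btVector3& to, const btVector3& color)
    {
        // Hypothetical: feed the segment to your own GLES 2.0 line renderer.
        renderLine(from, to, color);
    }
    virtual void drawContactPoint(const btVector3&, const btVector3&, btScalar, int, const btVector3&) {}
    virtual void reportErrorWarning(const char* text) { LOG("%s", text); }
    virtual void draw3dText(const btVector3&, const char*) {}
    virtual void setDebugMode(int mode) { mDebugMode = mode; }
    virtual int getDebugMode() const { return mDebugMode; }
};

Hooked up once, then invoked each frame after stepping the simulation:

GLDebugDrawer* drawer = new GLDebugDrawer();
drawer->setDebugMode(btIDebugDraw::DBG_DrawWireframe);
dynamicsWorld->setDebugDrawer(drawer);
// per frame:
dynamicsWorld->debugDrawWorld(); // calls drawLine for every wireframe segment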

glReadPixels get wrong data in android opengl es 2.0

I get the YUV data from the camera and send it to OpenGL, then I use a fragment shader to convert the data to RGBA format and show it on the screen. Everything goes well, but when I use glReadPixels to get the RGBA data from the framebuffer into an int array, I get wrong data.
// I use VBO to draw
glBindBuffer(GL_ARRAY_BUFFER, squareVerticesBufferID);
glVertexAttribPointer(gvPositionHandle, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(gvPositionHandle);
glBindBuffer(GL_ARRAY_BUFFER, textureVerticesBuferID);
glVertexAttribPointer(gvTextureHandle, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(gvTextureHandle);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, squareVerticesIndexBufferID);
glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_INT, 0);
// Then I use glReadPixels to read the RGBA data
unsigned char *returnDataPointer = (unsigned char*) malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, returnDataPointer);
Unfortunately I get wrong data: the last few thousand elements in the array are 0s. The same code works well on iOS. Did I miss something?
I'm working on Android 4.0.3 and using OpenGL ES 2.0 from the NDK.
