I am developing an OpenGL ES 2.0 live wallpaper for Android and the rendering is laggy (~11 fps). I am by no means an expert, and I couldn't find any online resources that answer my question. Benchmarking, I found that onDrawFrame(GL10 glUnused) stalls because of how many objects I use. I get an OK framerate if I restrict it to rendering ~100 objects, but my wallpaper consists of a raster of ~1000 triangles. Is this the problem? Here is the code:
public void onDrawFrame(GL10 glUnused) {
    // Scroll the view by the change in the wallpaper x-offset since last frame.
    Matrix.translateM(mViewMatrix, 0, (para.xOffset - f) * 10, 0, 0);
    f = para.xOffset;

    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    GLES20.glUseProgram(mPerVertexProgramHandle);

    mMVPMatrixHandle = GLES20.glGetUniformLocation(mPerVertexProgramHandle, "u_MVPMatrix");
    mMVMatrixHandle = GLES20.glGetUniformLocation(mPerVertexProgramHandle, "u_MVMatrix");
    mLightPosHandle = GLES20.glGetUniformLocation(mPerVertexProgramHandle, "u_LightPos");
    mPositionHandle = GLES20.glGetAttribLocation(mPerVertexProgramHandle, "a_Position");
    mColorHandle = GLES20.glGetAttribLocation(mPerVertexProgramHandle, "a_Color");
    mNormalHandle = GLES20.glGetAttribLocation(mPerVertexProgramHandle, "a_Normal");

    Matrix.setIdentityM(mLightModelMatrix, 0);
    Matrix.translateM(mLightModelMatrix, 0, 0.0f, 0.0f, -5.0f);
    Matrix.translateM(mLightModelMatrix, 0, 0.0f, 0.0f, 2.0f);
    Matrix.multiplyMV(mLightPosInWorldSpace, 0, mLightModelMatrix, 0, mLightPosInModelSpace, 0);

    for (newHexagon[] nha : h.hfgrd) {
        for (newHexagon nh : nha) {
            Matrix.setIdentityM(mModelMatrix, 0);
            Matrix.translateM(mModelMatrix, 0, nh.PosX, nh.PosY, -5);
            final FloatBuffer colors = ByteBuffer
                    .allocateDirect(nh.colors.length * mBytesPerFloat)
                    .order(ByteOrder.nativeOrder()).asFloatBuffer();
            colors.put(nh.colors).position(0);
            drawthis(colors);
        }
    }

    GLES20.glUseProgram(mPointProgramHandle);
    h.draw(0, 0);
}
Edit: additional information: the triangles are all the same size and are arranged in a flat hexagonal tiling of the screen.
CPU: Dual-core 1.2 GHz Scorpion
GPU: Adreno 220
Device: HTC Sensation
Android v4.0.3
There are at least 3 major problems in your code:
Don't call glGetUniformLocation on every frame: look the locations up once after shader compilation and assign them to fields (see the sketch after this list). Getting uniform/attribute locations is a rather expensive operation.
You may be generating too many draw calls (glDrawElements or glDrawArrays). Your drawthis() method presumably issues at least one draw call, so with ~1000 hexagons you issue ~1000 per frame. Hardware of this class is comfortable with at most about 200-500 draw calls per frame; each draw call carries a small driver-to-GPU communication overhead, so the best practice is to batch draw calls together as much as possible. Please post the code of drawthis() so the problem can be pinpointed.
ByteBuffer.allocateDirect() on every draw call. You should avoid any memory allocation in the render loop, yet you are allocating a fresh direct buffer for every hexagon on every frame. Allocate the buffer once and reuse it (also shown in the sketch below).
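For the first and third points, a minimal sketch (cacheHandles() and initColorBuffer() are hypothetical method names; everything else reuses the names from the code above):

// Call once, right after the shader program has been linked.
private void cacheHandles() {
    mMVPMatrixHandle = GLES20.glGetUniformLocation(mPerVertexProgramHandle, "u_MVPMatrix");
    mMVMatrixHandle = GLES20.glGetUniformLocation(mPerVertexProgramHandle, "u_MVMatrix");
    mLightPosHandle = GLES20.glGetUniformLocation(mPerVertexProgramHandle, "u_LightPos");
    mPositionHandle = GLES20.glGetAttribLocation(mPerVertexProgramHandle, "a_Position");
    mColorHandle = GLES20.glGetAttribLocation(mPerVertexProgramHandle, "a_Color");
    mNormalHandle = GLES20.glGetAttribLocation(mPerVertexProgramHandle, "a_Normal");
}

// Allocate once (big enough for the largest nh.colors array), then reuse.
private FloatBuffer mColorBuffer;

private void initColorBuffer(int maxFloats) {
    mColorBuffer = ByteBuffer.allocateDirect(maxFloats * mBytesPerFloat)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();
}

// In the per-hexagon loop, instead of ByteBuffer.allocateDirect():
//     mColorBuffer.clear();
//     mColorBuffer.put(nh.colors).position(0);
//     drawthis(mColorBuffer);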
Apologies for my English. I want to draw textures in plain C, with no Objective-C, because I need to write a library shared between iOS and Android. Here is my drawing code:
- (BOOL)createFramebuffer {
    glGenFramebuffersOES(1, &viewFramebuffer);
    glGenRenderbuffersOES(1, &viewRenderbuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(id<EAGLDrawable>)self.layer]; // Objective-C
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
    if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) {
        NSLog(@"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
        return NO;
    }
    return YES;
}
- (void)drawView {
    [EAGLContext setCurrentContext:context]; // Objective-C
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, spriteTexture);
    Byte *byteData = (Byte *)malloc(3686400);
    memcpy(byteData, [texData bytes] + kon, 3686400);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1280, 720, GL_RGBA, GL_UNSIGNED_BYTE, byteData);
    free(byteData);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES]; // Objective-C
}
How can I get rid of these lines, which are tied to Objective-C?
1. [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(id<EAGLDrawable>)self.layer];
2. [EAGLContext setCurrentContext:context];
3. [context presentRenderbuffer:GL_RENDERBUFFER_OES];
You can't, not if you still want this to render to the screen on iOS. EAGLContext is an Objective-C object that manages your OpenGL ES context, and you need to interact with it in order to render and display your scene.
However, you can wrap accesses to this in a function that changes its contents depending on what platform you are targeting. Compiler conditionals can help you do this.
I started writing a game for Android using OpenGL ES and just finished the draw code, which uses the glDrawTexfOES extension. It works fine on the emulator, but on my Samsung Galaxy S2 all the textures are drawn white.
To make sure I hadn't made any mistakes, I copied the source from a tutorial and ran it, with the same results. The tutorial code I am using can be seen here.
My textures are PNGs with power-of-two dimensions, and I am loading them from res/drawable, although I have also tried other locations such as drawable-nodpi, as I have seen suggested.
I have also checked the result of glGenTextures, which I have read can return odd values on certain phones, but it seems to be returning the correct values (1, 2, 3, ...).
Does anybody know why this could be happening or suggest some other checks I can do to figure out what is going wrong?
Here is a slightly modified version of the example code I linked above to keep things simple.
public void onSurfaceCreated(GL10 gl10, EGLConfig eglConfig) {
    gl10.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_FASTEST);
    // Set the background colour to black (rgba).
    gl10.glClearColor(0.0f, 0.0f, 0.0f, 1);
    // Enable flat shading.
    gl10.glShadeModel(GL10.GL_FLAT);
    // We don't need to worry about depth testing!
    gl10.glDisable(GL10.GL_DEPTH_TEST);
    // Set OpenGL to optimise for 2D textures.
    gl10.glEnable(GL10.GL_TEXTURE_2D);
    // Disable 3D-specific features.
    gl10.glDisable(GL10.GL_DITHER);
    gl10.glDisable(GL10.GL_LIGHTING);
    gl10.glTexEnvx(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_MODULATE);
    // Initial clear of the screen.
    gl10.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

    // Test for the device-specific draw-texture extension.
    String extensions = gl10.glGetString(GL10.GL_EXTENSIONS);
    boolean drawTexture = extensions.contains("draw_texture");
    Log.i("OpenGL Support - ver.:",
            gl10.glGetString(GL10.GL_VERSION) + " renderer:" +
            gl10.glGetString(GL10.GL_RENDERER) + " : " +
            (drawTexture ? "good to go!" : "forget it!!"));

    // LOAD TEXTURE
    mTextureName = new int[1];
    // Generate a texture ID.
    gl10.glGenTextures(1, mTextureName, 0);
    assert gl10.glGetError() == GL10.GL_NO_ERROR;
    // Bind the texture ID to the 2D texture target.
    gl10.glBindTexture(GL10.GL_TEXTURE_2D, mTextureName[0]);
    // Open an input stream and read the image.
    InputStream is = mContext.getResources().openRawResource(R.drawable.asteroid);
    Bitmap bitmap;
    try {
        bitmap = BitmapFactory.decodeStream(is);
    } finally {
        try {
            is.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    // Build our crop region to be the size of the bitmap (i.e. the full image).
    mCrop = new int[4];
    mCrop[0] = 0;
    mCrop[1] = imageHeight = bitmap.getHeight();
    mCrop[2] = imageWidth = bitmap.getWidth();
    mCrop[3] = -bitmap.getHeight();
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
    assert gl10.glGetError() == GL10.GL_NO_ERROR;
    bitmap.recycle();
}
public void onSurfaceChanged(GL10 gl10, int i, int i1) {
    gl10.glViewport(0, 0, i, i1);
    /*
     * Set our projection matrix. This doesn't have to be done each time we
     * draw, but usually a new projection needs to be set when the viewport
     * is resized.
     */
    float ratio = (float) i / i1;
    gl10.glMatrixMode(GL10.GL_PROJECTION);
    gl10.glLoadIdentity();
    gl10.glFrustumf(-ratio, ratio, -1, 1, 1, 10);
}
public void onDrawFrame(GL10 gl) {
    // Just clear the screen and depth buffer.
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    // Begin drawing
    //--------------
    // These calls can be experimented with for various effects such as
    // transparency, although certain functionality may be device specific.
    gl.glShadeModel(GL10.GL_FLAT);
    gl.glEnable(GL10.GL_BLEND);
    gl.glBlendFunc(GL10.GL_ONE, GL10.GL_ONE_MINUS_SRC_ALPHA);
    gl.glColor4x(0x10000, 0x10000, 0x10000, 0x10000);
    // Set up the correct projection matrix.
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glPushMatrix();
    gl.glLoadIdentity();
    gl.glOrthof(0.0f, mWidth, 0.0f, mHeight, 0.0f, 1.0f);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glPushMatrix();
    gl.glLoadIdentity();
    gl.glEnable(GL10.GL_TEXTURE_2D);
    // Draw all textures.
    gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureName[0]);
    ((GL11) gl).glTexParameteriv(GL10.GL_TEXTURE_2D, GL11Ext.GL_TEXTURE_CROP_RECT_OES, mCrop, 0);
    ((GL11Ext) gl).glDrawTexfOES(0, 0, 0, imageWidth, imageHeight);
    // Finish drawing
    gl.glDisable(GL10.GL_BLEND);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glPopMatrix();
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glPopMatrix();
}
Have you tried keeping all the images in drawable-nodpi and nowhere else?
I'm not sure whether it matters, but try these lines just before attaching the renderer:
glSurfaceView.setZOrderOnTop(true);
glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
glSurfaceView.getHolder().setFormat(PixelFormat.RGBA_8888);
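For context, a rough sketch of where those calls belong (assuming a standard Activity that owns the GLSurfaceView; the field and renderer names are placeholders). Note that they must run before setRenderer():

// In the Activity hosting the GLSurfaceView (names are placeholders):
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    glSurfaceView = new GLSurfaceView(this);
    // Surface configuration must happen before the renderer is attached.
    glSurfaceView.setZOrderOnTop(true);
    glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0); // RGBA8888, 16-bit depth
    glSurfaceView.getHolder().setFormat(PixelFormat.RGBA_8888);
    glSurfaceView.setRenderer(myRenderer); // myRenderer is your Renderer instance
    setContentView(glSurfaceView);
}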
The problem may be related to transparency and the PNG format.
If that doesn't work, could you please paste the code related to your GLSurfaceView and Renderer?
Thanks!
I am examining an interesting problem I'm facing with OpenGL lighting on Android. I'm working on a 3D viewer where you can add and manipulate 3D objects, and you can also set a light with different attributes. The problem is that the highlight from the light (a point light) behaves strangely on the 3D objects: if the light source is at the exact same point as the camera, the highlight moves in the opposite direction from what you would expect. (So if you move the object to the left, the highlight moves to the left edge of the object as well, instead of to the right, which is what I was expecting.)
To narrow the problem down, I created a small sample application that only renders a square, and then rotates that square around the camera position (the origin), which is also where the light is placed. This should result in all squares facing the camera directly, so that they would be completely highlighted. The result, though, looked like this:
Can it be that these artifacts appear because of the distortion you get on the border due to the projection?
In the first image the distance between the sphere and the camera is about 20 units and the size of the sphere is about 2. If I move the light closer to the object the highlight looks a lot better, in the way I'm expecting it.
In the second image the radius in which the squares are located is 25 units.
I'm using OpenGL ES 1.1 (since I was struggling to get it to work with shaders in ES 2.0) on Android 3.1
Here is some of the code I'm using:
public void onDrawFrame(GL10 gl) {
    // Setting the camera
    GLU.gluLookAt(gl, 0, 0, 0, 0f, 0f, -1f, 0f, 1.0f, 0.0f);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    for (int i = 0; i < 72; i++) {
        gl.glPushMatrix();
        gl.glRotatef(5f * i, 0, 1, 0);
        gl.glTranslatef(0, 0, -25);
        draw(gl);
        gl.glPopMatrix();
    }
}
public void draw(GL10 gl) {
    setMaterial(gl);
    gl.glEnable(GL10.GL_NORMALIZE);
    gl.glEnableClientState(GL10.GL_NORMAL_ARRAY);
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glFrontFace(GL10.GL_CCW);
    // Enable the vertex and normal state
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer);
    gl.glNormalPointer(GL10.GL_FLOAT, 0, mNormalBuffer);
    gl.glDrawElements(GL10.GL_TRIANGLES, mIndexBuffer.capacity(), GL10.GL_UNSIGNED_SHORT, mIndexBuffer);
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL10.GL_NORMAL_ARRAY);
}
// Setting the light
private void drawLights(GL10 gl) {
    // Point light
    float[] position = { 0, 0, 0, 1 };
    float[] diffuse = { .6f, .6f, .6f, 1f };
    float[] specular = { 1, 1, 1, 1 };
    float[] ambient = { .2f, .2f, .2f, 1 };
    gl.glEnable(GL10.GL_LIGHTING);
    gl.glEnable(GL10.GL_LIGHT0);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    gl.glLightfv(GL10.GL_LIGHT0, GL_POSITION, position, 0);
    gl.glLightfv(GL10.GL_LIGHT0, GL_DIFFUSE, diffuse, 0);
    gl.glLightfv(GL10.GL_LIGHT0, GL_AMBIENT, ambient, 0);
    gl.glLightfv(GL10.GL_LIGHT0, GL_SPECULAR, specular, 0);
}

private void setMaterial(GL10 gl) {
    float shininess = 30;
    float[] ambient = { 0, 0, .3f, 1 };
    float[] diffuse = { 0, 0, .7f, 1 };
    float[] specular = { 1, 1, 1, 1 };
    gl.glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, diffuse, 0);
    gl.glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, ambient, 0);
    gl.glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, specular, 0);
    gl.glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, shininess);
}
I'm setting the light once, when the activity is started (in onSurfaceCreated), and the material every time I draw a square.
The effect in your second example (with the squares) is rather due to the default non-local viewer that OpenGL uses. By default, the eye-space view vector (the vector from the vertex to the camera, used for the specular highlight computation) is just taken to be the (0, 0, 1) vector instead of the normalized vertex position. This approximation is only correct when the vertex is in the middle of the screen, and it gets more and more incorrect the farther you move towards the boundary of the screen.
To change this and let OpenGL use the real vector from the vertex to the camera, use the glLightModel function, specifically
glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER, GL_TRUE);
I'm not sure whether this is also the cause of your first problem (with the sphere), but it may be; just try it.
EDIT: It seems you cannot use GL_LIGHT_MODEL_LOCAL_VIEWER in OpenGL ES. In that case there is no way around this problem, short of switching to OpenGL ES 2.0 and doing all the lighting computations yourself in shaders.
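If you do go the ES 2.0 route, here is a minimal sketch of the relevant fragment-shader piece (written as a Java string, as is common on Android; all names are illustrative, and it assumes the vertex shader passes the eye-space position and normal as varyings). The key point is that the view direction is computed per fragment from the eye-space position instead of being assumed to be (0, 0, 1):

// Fragment shader sketch: Blinn-Phong specular with a "local viewer".
// v_Position and v_Normal are interpolated eye-space values from the
// vertex shader; u_LightPos is the eye-space light position.
private static final String FRAGMENT_SHADER =
        "precision mediump float;                                \n" +
        "uniform vec3 u_LightPos;                                \n" +
        "varying vec3 v_Position;                                \n" +
        "varying vec3 v_Normal;                                  \n" +
        "void main() {                                           \n" +
        "    vec3 n = normalize(v_Normal);                       \n" +
        "    vec3 lightDir = normalize(u_LightPos - v_Position); \n" +
        // The camera sits at the eye-space origin, so the true view
        // vector is -v_Position, not the constant (0, 0, 1):
        "    vec3 viewDir = normalize(-v_Position);              \n" +
        "    vec3 halfVec = normalize(lightDir + viewDir);       \n" +
        "    float spec = pow(max(dot(n, halfVec), 0.0), 30.0);  \n" +
        "    float diff = max(dot(n, lightDir), 0.0);            \n" +
        "    gl_FragColor = vec4(vec3(0.0, 0.0, 0.7) * diff + vec3(spec), 1.0);\n" +
        "}                                                       \n";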
Your light is probably moving when you move your object.
Take a look at this FAQ entry: http://www.opengl.org/resources/faq/technical/lights.htm#ligh0050
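In practice that means re-specifying the light position every frame, after the view transform has been loaded but before any model transforms, so the light stays fixed in world space while the objects move. A sketch in the GL10 style of the code above:

// At the start of onDrawFrame, once the view matrix has been set up:
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
// glLightfv transforms the position by the current modelview matrix at
// call time, so with only the view transform loaded this pins the light
// in world space.
float[] lightPos = { 0, 0, 0, 1 };
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_POSITION, lightPos, 0);
// ...per-object model transforms and draw calls follow.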
I'm working on my very first OpenGL game, inspired by the game Greed Corp on the PlayStation Network. It's a turn-based strategy game based on a hex grid, where each hexagon tile has its own height and texture.
I'm currently drawing a hexagon based on some examples and tutorials I've read. Here's my HexTile class:
public class HexTile
{
    private float height;
    private int[] textures = new int[1];

    private float vertices[] = {
            0.0f,  0.0f, 0.0f,  // center
            0.0f,  1.0f, 0.0f,  // top
           -1.0f,  0.5f, 0.0f,  // left top
           -1.0f, -0.5f, 0.0f,  // left bottom
            0.0f, -1.0f, 0.0f,  // bottom
            1.0f, -0.5f, 0.0f,  // right bottom
            1.0f,  0.5f, 0.0f,  // right top
    };

    private short[] indices = { 0, 1, 2, 3, 4, 5, 6, 1 };

    //private float texture[] = { };

    private FloatBuffer vertexBuffer;
    private ShortBuffer indexBuffer;
    //private FloatBuffer textureBuffer;

    public HexTile()
    {
        ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4);
        vbb.order(ByteOrder.nativeOrder());
        vertexBuffer = vbb.asFloatBuffer();
        vertexBuffer.put(vertices);
        vertexBuffer.position(0);

        ByteBuffer ibb = ByteBuffer.allocateDirect(indices.length * 2);
        ibb.order(ByteOrder.nativeOrder());
        indexBuffer = ibb.asShortBuffer();
        indexBuffer.put(indices);
        indexBuffer.position(0);

        /*ByteBuffer tbb = ByteBuffer.allocateDirect(texture.length * 4);
        tbb.order(ByteOrder.nativeOrder());
        textureBuffer = tbb.asFloatBuffer();
        textureBuffer.put(texture);
        textureBuffer.position(0);*/
    }

    public void setHeight(float h)
    {
        height = h;
    }

    public float getHeight()
    {
        return height;
    }

    public void draw(GL10 gl)
    {
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
        gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
        //gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
        gl.glDrawElements(GL10.GL_TRIANGLE_FAN, indices.length, GL10.GL_UNSIGNED_SHORT, indexBuffer);
    }

    public void loadGLTexture(GL10 gl, Context context)
    {
        textures[0] = -1;
        Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.hex);
        while (textures[0] <= 0)
            gl.glGenTextures(1, textures, 0);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
        bitmap.recycle();
    }
}
Every frame I'm looping through all visible tiles to draw them; that would be 11 * 9 tiles at most. This, however, drops my framerate to 38, and that is without even drawing textures on them, just the flat hexagons.
Now I'm trying to figure out how to increase performance. I figured drawing the whole grid at once could be faster, but I have no idea how to do that, since each tile can have a different height, and will most likely have a different texture than a neighboring tile.
I'd really appreciate some help on this, because I'd like to get started on the actual game ^.^
Assuming your hex grid is static you can just spin over all your hexagons once, generate their geometry, and append everything to one (or more, if you have more than 2^16 vertices) large VBO that you can draw in one go.
As far as textures go you may be able to use a texture atlas.
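A minimal sketch of the batching idea, building the geometry once and drawing it in a single call (appendTileTriangles() is a hypothetical helper that writes one tile's six triangles, pre-translated to its grid position; uploading the result into a VBO follows the same pattern):

private FloatBuffer batchBuffer;
private int batchVertexCount;

// Build one big triangle list for the whole grid, once, at load time.
private void buildBatch(HexTile[][] grid) {
    int floatsPerTile = 18 * 3; // 6 triangles * 3 vertices * xyz
    int tileCount = grid.length * grid[0].length;
    FloatBuffer fb = ByteBuffer
            .allocateDirect(tileCount * floatsPerTile * 4)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();
    for (HexTile[] row : grid) {
        for (HexTile tile : row) {
            // Hypothetical helper: writes the tile's 18 xyz vertices,
            // already translated to the tile's grid position and height.
            appendTileTriangles(fb, tile);
        }
    }
    fb.position(0);
    batchBuffer = fb;
    batchVertexCount = tileCount * 18;
}

// One draw call for the whole grid instead of one per tile.
public void drawBatch(GL10 gl) {
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, batchBuffer);
    gl.glDrawArrays(GL10.GL_TRIANGLES, 0, batchVertexCount);
}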
I'm currently learning OpenGL as well; I've produced one Android OpenGL application, called 'Cloud Stream', where I ran into similar performance issues.
As a general answer to performance concerns, a few things can help. The graphics hardware's vertex pipeline is more efficient when it is handed large batches of vertices at once: calling glVertexPointer once with the vertices of all tiles is more efficient than calling it once per hex tile.
This makes things harder to code, as you essentially draw all your tiles at once, but it does seem to speed things up. In my application, all of the clouds are drawn in one call.
Another avenue to try is saving the vertex positions in a VBO, which I found quite tricky, at least when trying to cater for Android 2.1 users; these days things might be easier. The idea is to store your tile's vertex array in video memory, and you get back an ID for it just as you do with textures (see the sketch below). As you can imagine, not re-sending your vertex array every frame speeds up each tile draw. Even if things aren't static it's a good approach, as I doubt your geometry changes every frame.
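A rough sketch of that VBO path on Android's GL11 interface (field and method names are placeholders; vertexBuffer is assumed to hold the geometry, positioned at 0, as in the question):

private int vboId;

// Upload the vertex data to video memory once (requires a GL11 context,
// i.e. OpenGL ES 1.1).
private void createVbo(GL11 gl, FloatBuffer vertexBuffer) {
    int[] ids = new int[1];
    gl.glGenBuffers(1, ids, 0);
    vboId = ids[0];
    gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, vboId);
    gl.glBufferData(GL11.GL_ARRAY_BUFFER, vertexBuffer.capacity() * 4,
            vertexBuffer, GL11.GL_STATIC_DRAW);
    gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, 0);
}

// At draw time, point at the VBO instead of re-sending a client-side array.
private void drawWithVbo(GL11 gl, int vertexCount) {
    gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, vboId);
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, 0); // last arg: byte offset into the VBO
    gl.glDrawArrays(GL10.GL_TRIANGLES, 0, vertexCount);
    gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, 0);
}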
Another suggestion I came across online is to use GL_SHORT instead of GL_FLOAT for your vertex data, and then scale the finished rendering to the size you want. This shrinks your vertex data and speeds things up a little; not a huge amount, but still worth trying, I would say.
One last thing: if you end up using any transparency, be aware that blended geometry must be painted back to front to composite correctly, which has a performance impact as well. When you draw opaque geometry front to back, the depth test lets the GPU skip fragments that are already hidden; drawing back to front means every pixel gets shaded. Keep this in mind, and draw opaque things front to back whenever possible (see the sketch below).
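A sketch of the usual two-pass ordering (sortByDepth(), opaqueTiles, and transparentTiles are hypothetical; it also assumes the depth test is enabled):

// Opaque pass: front to back, so the depth test rejects hidden fragments early.
sortByDepth(opaqueTiles, true); // true = nearest first
for (HexTile t : opaqueTiles) t.draw(gl);

// Transparent pass: back to front, so blending composites correctly.
gl.glEnable(GL10.GL_BLEND);
gl.glDepthMask(false); // read depth, but don't write it
sortByDepth(transparentTiles, false);
for (HexTile t : transparentTiles) t.draw(gl);
gl.glDepthMask(true);
gl.glDisable(GL10.GL_BLEND);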
Good luck with your game; I'd love to know how you get on. I'm just starting my own and I'm quite excited. If you haven't already come across it, I think this is worth a read: http://www.codeproject.com/KB/graphics/hexagonal_part1.aspx
The reason I'm asking is that our app (The Elements) runs fine on a Droid and a Nexus One, our two test phones, but not on our recently acquired Atrix 4G. What draws is a skewed version of what should draw, with all the colors replaced by alternating lines of (approximately) cyan, magenta, and yellow, which leads us to believe that one of the primary colors of the sand particles is missing, depending on which line it lands on. I'm sorry for the unclear description; we had images, but since this account doesn't have 10 reputation we couldn't post them.
Here is the code of our gl.c file, which does the texturing and rendering:
/*
 * gl.c
 * --------------------------
 * Defines the gl rendering and initialization
 * functions appInit, appDeinit, and appRender.
 */
#include "gl.h"
#include <android/log.h>
#include <stdlib.h>

unsigned int textureID;
float vertices[] =
    {0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f};
float texture[] =
    {0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f};
unsigned char indices[] =
    {0, 1, 3, 0, 3, 2};
int texWidth = 1, texHeight = 1;

void glInit()
{
    //Set some properties
    glShadeModel(GL_FLAT);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);

    //Generate the new texture
    glGenTextures(1, &textureID);
    //Bind the texture
    glBindTexture(GL_TEXTURE_2D, textureID);
    //Enable 2D texturing
    glEnable(GL_TEXTURE_2D);
    //Disable depth testing
    glDisable(GL_DEPTH_TEST);
    //Enable the vertex and coord arrays
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    //Set tex params
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    //Set up texWidth and texHeight here, then create the initial empty
    //texture from a dummy pixel array
    unsigned char *emptyPixels = malloc(texWidth * texHeight * 3);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texWidth, texHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, emptyPixels);
    //Free the dummy array
    free(emptyPixels);

    //Set the pointers
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glTexCoordPointer(2, GL_FLOAT, 0, texture);
}
void glRender()
{
    //Check for changes in screen dimensions or work dimensions and handle them
    if (dimensionsChanged)
    {
        vertices[2] = (float) screenWidth;
        vertices[5] = (float) screenHeight;
        vertices[6] = (float) screenWidth;
        vertices[7] = (float) screenHeight;
        texture[2] = (float) workWidth / texWidth;
        texture[5] = (float) workHeight / texHeight;
        texture[6] = (float) workWidth / texWidth;
        texture[7] = (float) workHeight / texHeight;

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        if (!flipped)
        {
            glOrthof(0, screenWidth, screenHeight, 0, -1, 1); //--Device
        }
        else
        {
            glOrthof(0, screenWidth, 0, -screenHeight, -1, 1); //--Emulator
        }
        dimensionsChanged = FALSE;
        zoomChanged = FALSE;
    }
    else if (zoomChanged)
    {
        texture[2] = (float) workWidth / texWidth;
        texture[5] = (float) workHeight / texHeight;
        texture[6] = (float) workWidth / texWidth;
        texture[7] = (float) workHeight / texHeight;
        zoomChanged = FALSE;
    }

    //__android_log_write(ANDROID_LOG_INFO, "TheElements", "updateview begin");
    UpdateView();
    //__android_log_write(ANDROID_LOG_INFO, "TheElements", "updateview end");

    //Clear the screen
    glClear(GL_COLOR_BUFFER_BIT);
    //Sub the work portion of the tex
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, workWidth, workHeight, GL_RGB, GL_UNSIGNED_BYTE, colors);
    //Actually draw the rectangle with the texture on it
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, indices);
}
Any ideas as to what the difference is between the Atrix 4G and other phones in terms of OpenGL, or why our app is behaving this way in general, are much appreciated! Thanks in advance.
Here is an example of what it looks like: http://imgur.com/Oyw64
I know it's a bit late to reply, but the real reason you're seeing the mixed colors is OpenGL's scanline unpack alignment, not a driver bug or a power-of-two texture size issue.
Your glTexSubImage2D call sends data in GL_RGB format, so I'm guessing your colors buffer is 3 bytes per pixel. Odds are the Droid and Nexus One drivers default to an unpack alignment of 1, while the Tegra 2 defaults to an alignment of 4. Your tightly packed 3-byte-per-pixel rows then fall out of alignment with what the driver expects, and a byte or two is skipped at the start of each subsequent scanline, producing the colors you see. (For example, a 50-pixel-wide GL_RGB row is 150 bytes; with an alignment of 4 the driver assumes a 152-byte row stride, so each scanline drifts 2 bytes further into your data.) The reason this works with a power-of-two-sized texture is that your rows then happen to be properly aligned for the next scanline. Basically this is the same issue as loading BMPs, where each scanline has to be padded to 4 bytes, regardless of the bit depth of the image.
You can explicitly disable the alignment assumption by calling glPixelStorei(GL_UNPACK_ALIGNMENT, 1); (GL_UNPACK_ALIGNMENT applies to data you send to GL, while GL_PACK_ALIGNMENT is its counterpart for readbacks such as glReadPixels). Note that this only changes the way OpenGL interprets your texture data, so there is no rendering performance penalty. When the texture is sent to the graphics subsystem, the scanlines are stored in whatever format is optimal for the hardware, but the driver still has to know how to unpack your data properly. However, since you are changing the texture data every frame, instead of the driver being able to do one memcpy() to upload the entire texture, it has to do one memcpy() per scanline. This is unlikely to be a major bottleneck, but if you are after the best performance you may want to query the driver's default alignment at startup using glGetIntegerv(GL_UNPACK_ALIGNMENT, &align); and pad your buffer rows accordingly at runtime.
Here's the specification of glPixelStorei() for reference.
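To make the stride arithmetic concrete, a small sketch (in Java for illustration, though this thread's renderer is C; the arithmetic is identical there):

// Row stride the driver expects for a given width, bytes per pixel,
// and GL_UNPACK_ALIGNMENT value.
static int paddedStride(int widthPixels, int bytesPerPixel, int alignment) {
    int rowBytes = widthPixels * bytesPerPixel;
    // Round rowBytes up to the next multiple of the alignment.
    return ((rowBytes + alignment - 1) / alignment) * alignment;
}

// GL_RGB is 3 bytes per pixel. A 50-pixel row is 150 bytes:
//   paddedStride(50, 3, 4) == 152  -> 2 bytes of drift per scanline
//   paddedStride(50, 3, 1) == 150  -> tightly packed, matches the buffer
// Power-of-two widths happen to work out: 64 * 3 = 192, already divisible by 4.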
OK, we finally found the actual problem. It turns out that glTexSubImage2D() actually requires the WIDTH to be a power of two, but not the height, for some GPUs, including the Tegra 2. We thought only the texture itself needed power-of-two dimensions, and that's where we were wrong. We're going to have to do a bit of recoding, but hopefully this will work out in the end (AT LAST!!).
The Atrix 4G is the first prominent phone that uses Nvidia's Tegra GPU. As such, it has an entirely different OpenGL implementation than previous Android devices. Either you are observing a bug in the Tegra hardware+software combination, or your application relies on undefined behavior and you were getting lucky on other devices.
You may want to file a bug report with Nvidia.