I'm trying to create a simple game on Android. By "road" I mean something like Temple Run or Subway Surfers, but much simpler and more abstract, so that I can build it with OpenGL ES alone, without any other libraries.
So I've read a lot of basic tutorials that explain the 3D construction logic, and I used the standard sample that creates a rotating 3D cube.
I am now trying to use that sample to create the game road. I made the square look more like a rectangle and duplicated it into a 30x5 grid forming the road. I've tried many combinations and searched the internet for a solution, and I still have these problems/questions:
1. How do I place all 30x5 squares right next to each other? I always end up with an unwanted gap between the squares.
2. I want to set the eye point (the "camera") at 45 degrees looking toward the middle of the first row, so the player can see the road in front of him.
3. Next, I want to move along the road. I've seen how the rotation works. Is there a way to do the same with the viewpoint, or do I need to change the Z values of the squares I draw?
4. I see that onDrawFrame() is called over and over again. To control the FPS, I've seen people on the internet use their own FPS calculation with a sleep(). Isn't there a built-in one already?
GLRenderer code:
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.content.Context;
import android.opengl.GLSurfaceView;
import android.opengl.GLU;
import android.util.Log;
class GLRenderer implements GLSurfaceView.Renderer {
private static final String TAG = "GLRenderer" ;
private final Context context;
private float mCubeRotation = 70.0f;
private Triangle triangle;
private Cube[][] cube;
GLRenderer(Context context) {
this.context = context;
}
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
gl.glClearDepthf(1.0f);
gl.glEnable(GL10.GL_DEPTH_TEST);
gl.glDepthFunc(GL10.GL_LEQUAL);
gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT,GL10.GL_NICEST);
}
public void onSurfaceChanged(GL10 gl, int width, int height) {
Log.d("MyOpenGLRenderer", "Surface changed. Width=" + width
+ " Height=" + height);
System.out.println("arg");
//get map
cube = new Cube[30][5];
for(int i = 0; i < cube.length; i++)
for(int j = 0; j < cube[i].length; j++)
cube[i][j] = new Cube();
//draw triangle
triangle = new Triangle(0.5f, 1, 0, 0);
// Define the view frustum
gl.glViewport(0, 0, width, height);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
float ratio = (float) width / height;
GLU.gluPerspective(gl, 45.0f, ratio, 0.1f, 100.0f);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
}
public void onDrawFrame(GL10 gl) {
// Clear the screen to black
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
//translate(dx, dy, dz)
// Position model so we can see it
//gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatef(0.0f, 0.0f, -10.0f);
gl.glRotatef(mCubeRotation, 1.0f, 1.0f, 1.0f);
gl.glTranslatef(0.0f, 0.0f, -10.0f);
cube[0][0].draw(gl);
gl.glTranslatef(0.0f, 0.0f, -10.0f);
cube[0][1].draw(gl);
gl.glTranslatef(0.0f, 0.0f, -10.0f);
cube[0][2].draw(gl);
gl.glLoadIdentity();
//set rotation
mCubeRotation -= 0.15f;
System.out.println("mCubeRotation: "+mCubeRotation);
}
}
Cube code:
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.opengles.GL10;
class Cube {
private FloatBuffer mVertexBuffer; //vertex
private FloatBuffer mColorBuffer; //color
private ByteBuffer mIndexBuffer; //face indices
float width = 1.0f;
float height = 0.5f;
float depth = 1.0f;
private float vertices[] = {
-width, -height, -depth, // 0
width, -height, -depth, // 1
width, height, -depth, // 2
-width, height, -depth, // 3
-width, -height, depth, // 4
width, -height, depth, // 5
width, height, depth, // 6
-width, height, depth, // 7
};
private float colors[] = {
0.0f, 1.0f, 0.0f,
1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 1.0f,
0.5f, 0.0f, 1.0f,
1.0f, 0.5f, 0.0f,
1.0f, 1.0f, 0.0f,
0.0f, 1.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f,
1.0f, 1.0f
};
private byte indices[] = {
0, 4, 5,
0, 5, 1,
1, 5, 6,
1, 6, 2,
2, 6, 7,
2, 7, 3,
3, 7, 4,
3, 4, 0,
4, 7, 6,
4, 6, 5,
3, 0, 1,
3, 1, 2
};
public Cube() {
ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4);
byteBuf.order(ByteOrder.nativeOrder());
mVertexBuffer = byteBuf.asFloatBuffer();
mVertexBuffer.put(vertices);
mVertexBuffer.position(0);
byteBuf = ByteBuffer.allocateDirect(colors.length * 4);
byteBuf.order(ByteOrder.nativeOrder());
mColorBuffer = byteBuf.asFloatBuffer();
mColorBuffer.put(colors);
mColorBuffer.position(0);
mIndexBuffer = ByteBuffer.allocateDirect(indices.length);
mIndexBuffer.put(indices);
mIndexBuffer.position(0);
}
public void draw(GL10 gl) {
gl.glFrontFace(GL10.GL_CW);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer);
gl.glColorPointer(4, GL10.GL_FLOAT, 0, mColorBuffer);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE,
mIndexBuffer);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
}
}
Eventually I'll draw the square array using glDrawArrays() or glDrawElements() but for now I've used only 3 objects.
There are a lot of questions here. I can't cover everything in detail, but hopefully I can give you some pointers to steer you in the right direction.
To draw 150 squares, you have a number of options:
Create a vertex buffer with a single square, and draw it 150 times, with translations applied. This is probably the easiest way to get off the ground, so I would recommend getting it working first (there is a rough sketch after this list). It's a reasonable approach if all your squares look the same.
Create 150 vertex buffers, with different coordinates. I wouldn't recommend it because it's the least efficient, and doesn't have any benefits over other approaches.
Store the vertices for all 150 squares in a single vertex buffer. This will be the most efficient of the first 3 options, but only works well as long as the relative orientation of the squares remains the same. You may want to try this once you have the basics working.
Use instanced rendering. This is a more advanced feature, and only available in ES 3.0. Just mentioning it for future reference.
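To make option 1 concrete, here is a rough, untested sketch of an onDrawFrame() that draws one shared Cube instance in a 30x5 grid. The roadCube field is a hypothetical replacement for your 30x5 array, and the cell size of 2 units follows from the Cube class in the question (the vertices run from -width to +width):

// Option 1 sketch: one Cube instance, drawn 30x5 times with one translation per cell.
public void onDrawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    gl.glTranslatef(-4.0f, -2.0f, -15.0f);   // pull the whole road into view (values to taste)

    final float cellWidth = 2.0f;   // 2 * Cube.width
    final float cellDepth = 2.0f;   // 2 * Cube.depth
    for (int row = 0; row < 30; row++) {
        for (int col = 0; col < 5; col++) {
            gl.glPushMatrix();
            // Step by exactly one cell per square: no gaps, no overlap.
            gl.glTranslatef(col * cellWidth, 0.0f, -row * cellDepth);
            roadCube.draw(gl);      // roadCube is a single Cube field (hypothetical)
            gl.glPopMatrix();
        }
    }
}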
What you attempted is sort of a hybrid between options 1 and 2. If you want to go with option 1, you only need one instance of your Cube class. If you look at what you did, this makes sense: you created 150 objects that are all exactly the same, which is not very useful.
Now, on your questions:
To draw the squares without gaps between them, the amount of each translation needs to match the size of a square (that is what the sketch above does). Your squares are 2 units wide, but you translate each one by 10 units. You also translate them in the z-direction, which I don't quite understand.
If you want to stick with the kind of functionality you have been using, check out GLU.gluLookAt(). It allows you to place your camera where you want it, and point it in any direction (a sketch follows below these answers).
Same as question 2: call GLU.gluLookAt() with updated eye and look-at coordinates every time you want to move the viewpoint.
Android caps the frame rate at 60 frames per second. That's normally what you should be shooting for anyway, IMHO. If you want to limit it to 30 fps later to save power, I think you can cross that bridge when you get there. Based on what I researched recently, there's no clean and portable way to do this on Android. The proposed solutions I have seen all look kind of hacky to me.
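For questions 2 and 3, here is a rough, untested sketch of placing the camera above the start of the road and sliding it forward each frame. The eye height of 5 units, the 0.1 step, and the roadMiddleX value are illustrative assumptions, not values taken from your code:

// Camera sketch using GLU.gluLookAt(); playerZ is a hypothetical field
// tracking how far along the road the player has travelled.
private float playerZ = 0.0f;

public void onDrawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();

    float roadMiddleX = 4.0f;   // middle of a 5-cube-wide road with 2-unit cells (assumed)
    // Eye sits 5 units up and 5 units behind the look-at point, i.e. looking down at ~45 degrees.
    GLU.gluLookAt(gl,
            roadMiddleX, 5.0f, -playerZ + 5.0f,   // eye position
            roadMiddleX, 0.0f, -playerZ,          // point the camera looks at
            0.0f, 1.0f, 0.0f);                    // "up" direction

    playerZ += 0.1f;   // move the viewpoint forward a little every frame

    // ... draw the road here, without any extra camera translations ...
}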
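For question 4, the workarounds people post usually look like the sketch below: measure how long the frame took and sleep off the remainder inside onDrawFrame(). Treat it purely as an illustration of that hacky approach; the 30 fps target and the field names are made up:

// Crude frame limiter -- illustrative only. Needs: import android.os.SystemClock;
private static final long TARGET_FRAME_MS = 33;   // roughly 30 fps (arbitrary target)
private long lastFrameTime = 0;

public void onDrawFrame(GL10 gl) {
    long elapsed = SystemClock.uptimeMillis() - lastFrameTime;
    if (lastFrameTime != 0 && elapsed < TARGET_FRAME_MS) {
        try {
            Thread.sleep(TARGET_FRAME_MS - elapsed);   // burn off the spare time
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
    lastFrameTime = SystemClock.uptimeMillis();

    // ... normal drawing code goes here ...
}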
A couple more things on your code:
Your color definitions look odd. You specify colors in 4 components, and the size of the array is correct for that. But you write the array with 3 values per line, which makes it look like you want 3 component colors. Either one can be done, but you need to make sure that you're consistent. 3 components are enough, unless you need transparency.
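For example, if you go with 4-component (RGBA) colors, one per cube vertex, the array could be laid out as below. The color values here are arbitrary; this only shows the shape of the data that matches your glColorPointer(4, ...) call:

// 8 vertices x 4 components (RGBA); example colors, all fully opaque.
private float colors[] = {
    1.0f, 0.0f, 0.0f, 1.0f,   // vertex 0
    0.0f, 1.0f, 0.0f, 1.0f,   // vertex 1
    0.0f, 0.0f, 1.0f, 1.0f,   // vertex 2
    1.0f, 1.0f, 0.0f, 1.0f,   // vertex 3
    1.0f, 0.0f, 1.0f, 1.0f,   // vertex 4
    0.0f, 1.0f, 1.0f, 1.0f,   // vertex 5
    1.0f, 1.0f, 1.0f, 1.0f,   // vertex 6
    0.5f, 0.5f, 0.5f, 1.0f,   // vertex 7
};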
You are using ES 1.0. That's valid, and might be easier to get started with. But you should be aware that many of its features are considered obsolete, and using ES 2.0 would let you learn more modern and current OpenGL features. The initial hurdle will be higher, so there's definitely a tradeoff here.
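To give a feel for that hurdle, here is a rough, untested sketch of the ES 2.0 equivalent of getting anything on screen: even one flat-colored triangle needs a vertex shader, a fragment shader, and a linked program (the class and variable names are made up, and error checking is omitted):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;

class Gles2Renderer implements GLSurfaceView.Renderer {
    private static final String VERTEX_SHADER =
            "attribute vec4 aPosition;\n" +
            "void main() { gl_Position = aPosition; }\n";
    private static final String FRAGMENT_SHADER =
            "precision mediump float;\n" +
            "void main() { gl_FragColor = vec4(0.2, 0.7, 0.2, 1.0); }\n";

    private int program;
    private FloatBuffer triangle;

    public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        program = GLES20.glCreateProgram();
        GLES20.glAttachShader(program, compile(GLES20.GL_VERTEX_SHADER, VERTEX_SHADER));
        GLES20.glAttachShader(program, compile(GLES20.GL_FRAGMENT_SHADER, FRAGMENT_SHADER));
        GLES20.glLinkProgram(program);

        float[] coords = { 0f, 0.5f, 0f,   -0.5f, -0.5f, 0f,   0.5f, -0.5f, 0f };
        triangle = ByteBuffer.allocateDirect(coords.length * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        triangle.put(coords).position(0);
        GLES20.glClearColor(0f, 0f, 0f, 1f);
    }

    public void onSurfaceChanged(GL10 unused, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
    }

    public void onDrawFrame(GL10 unused) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
        GLES20.glUseProgram(program);
        int aPosition = GLES20.glGetAttribLocation(program, "aPosition");
        GLES20.glEnableVertexAttribArray(aPosition);
        GLES20.glVertexAttribPointer(aPosition, 3, GLES20.GL_FLOAT, false, 0, triangle);
        GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 3);
        GLES20.glDisableVertexAttribArray(aPosition);
    }

    private static int compile(int type, String source) {
        int shader = GLES20.glCreateShader(type);
        GLES20.glShaderSource(shader, source);
        GLES20.glCompileShader(shader);
        return shader;
    }
}

The GLSurfaceView hosting it also needs setEGLContextClientVersion(2) before setRenderer().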
Here is my problem:
I have a GLSurfaceView with a Renderer and so on. Everything works just as I wanted on older Android versions. But on newer versions (I guess > 4.x) it just shows a black screen without any Bitmaps. For example, if I use gl.glClearColor(0.1f, 0.2f, 0.3f, 0.5f); in my onSurfaceCreated method, the screen changes from black to that color. So I think the problem must be the camera looking in the wrong direction or something, because the background color is drawn.
Since I am pretty new to OpenGL, I wanted to ask if there are any connections between Android versions and the OpenGL-camera or something like that?
Many people say my bitmap sizes have to be powers of 2, but that doesn't solve anything.
Here is my Renderer:
public class GlRenderer implements Renderer {
@Override
public void onDrawFrame(GL10 gl) {
// clear Screen Buffer
gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
// Reset the Modelview Matrix
gl.glLoadIdentity();
gl.glTranslatef(0.0f, 0.0f, -5.0f); // move 5 units INTO the screen
// is the same as moving the camera 5 units away
updateLogic(gl);
drawEverything(gl);
}
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
gl.glViewport(0, 0, width, height);
gl.glMatrixMode(GL10.GL_PROJECTION); // or some matrix uniform if using shaders
gl.glLoadIdentity();
gl.glOrthof(0, width, height, 0, -1, 1); // this will allow to pass vertices in 'canvas pixel' coordinates
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
}
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
gl.glDisable(GL10.GL_DITHER);
gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_FASTEST);
gl.glEnable(GL10.GL_TEXTURE_2D); //Enable Texture Mapping ( NEW )
gl.glShadeModel(GL10.GL_SMOOTH); //Enable Smooth Shading
gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f); //Set Background
gl.glEnable(GL10.GL_BLEND);
gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
}
}
Not sure if this is the issue you are having, but try changing the following line:
gl.glOrthof(0, width, height, 0, -1, 1);
to
gl.glOrthof(0, width, height, 0, 1, -1);
Notice that the near/far values are inverted. See this for a description of this madness :)
I'm new to the Android NDK and Native Activity. I'd like to create a triangle in the middle of the screen, but no matter what I tried, it wouldn't show up!
Here is my initialize method:
void Engine::initialize() {
LOGI("Engine::initialize fired!");
const EGLint attribs[] = {
EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
EGL_BLUE_SIZE, 8,
EGL_GREEN_SIZE, 8,
EGL_RED_SIZE, 8,
EGL_NONE
};
EGLint w, h, dummy, format;
EGLint numConfigs;
EGLConfig config;
EGLSurface surface;
EGLContext context;
EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
eglInitialize(display, 0, 0);
eglChooseConfig(display, attribs, &config, 1, &numConfigs);
eglGetConfigAttrib(display, config, EGL_NATIVE_VISUAL_ID, &format);
ANativeWindow_setBuffersGeometry(this->app->window, 0, 0, format);
surface = eglCreateWindowSurface(display, config, this->app->window, NULL);
context = eglCreateContext(display, config, NULL, NULL);
if (eglMakeCurrent(display, surface, surface, context) == EGL_FALSE) {
LOGW("Unable to eglMakeCurrent");
return;
}
eglQuerySurface(display, surface, EGL_WIDTH, &w);
eglQuerySurface(display, surface, EGL_HEIGHT, &h);
this->display = display;
this->context = context;
this->surface = surface;
this->width = w;
this->height = h;
// Initialize GL state.
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);
glEnable(GL_CULL_FACE);
glShadeModel(GL_SMOOTH);
glDisable(GL_DEPTH_TEST);
this->animating = true;
}
And here is my render method:
void Engine::onRender() {
glClearColor(0.7, 0.1, 0.5, 1);
glClear(GL_COLOR_BUFFER_BIT);
glViewport(0, 0, this->width, this->height);
//glMatrixMode(GL_PROJECTION);
//glLoadIdentity();
//glFrustumf(-this->width / 2, this->width / 2, -this->height / 2, this->height / 2, 1, 3);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0, 0, 0);
GLfloat triangle[] = {
0, 0, 0,
0, 100, 0,
100, -100, 0
};
glPushMatrix();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glTranslatef(0, 0, 0);
glColor4f(1.0f, 0.3f, 0.0f, .5f);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, triangle);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 3);
glDisableClientState(GL_VERTEX_ARRAY);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0, 0, -10);
eglSwapBuffers(this->display, this->surface);
}
Anyone can help?
All I can see is the pink/purple background, but no other pixels :| No errors in the console.
You probably have OpenGL errors, but they are not logged automatically. You can use this function to check for them:
static void checkGlError(const char* op) {
for (GLint error = glGetError(); error; error = glGetError()) {
LOGI("after %s() glError (0x%x)\n", op, error);
}
}
and call it after each OpenGL call. Then you will see exactly where it fails.
Apart from that, I would recommend using OpenGL ES 2.0. I'm not sure right now whether all the calls you are using work with ES 1.1 (maybe someone else can confirm).
In addition, there is an NDK sample that implements exactly the same thing as you are attempting, but using ES 2.0 instead. You can find it here:
http://code.google.com/p/android-cmake/source/browse/samples/hello-gl2/jni/gl_code.cpp?r=787b14cf9ed13299cb4c729d9a67d06e300fd52e
It uses a simple shader to paint the triangle and renders it using a VBO.
I was having the same problem for many hours today, and finally found an answer. It is most likely not an error in your code, but in the testing device.
First: are you working with an Android Virtual Device or with a physical phone?
In the first case, you need to use an AVD with at least API 15 and set GPU emulation to yes. Or, from the command line, you can use this option when you run your AVD:
emulator -avd <avd_name> -gpu on
If it is ok, you will find these lines in the logcat:
D/libEGL ( 595): loaded /system/lib/egl/libGLES_android.so
D/libEGL ( 595): loaded /system/lib/egl/libEGL_emulation.so
D/libEGL ( 595): loaded /system/lib/egl/libGLESv1_CM_emulation.so
D/libEGL ( 595): loaded /system/lib/egl/libGLESv2_emulation.so
Otherwise, you might find only the first line, together with an error like "egl.cnf not found, falling back to default" (I found this help at: https://developer.amazon.com/sdk/fire/enable-features.html#GPU).
Now, if you are using a physical phone, I just read that some phones don't seem to support EGL properly, mainly some running CyanogenMod (they display a similar error in the logcat). In that case, you should test on another phone, or on an AVD with the specifications above.
I started writing a game for Android using OpenGL ES and just finished the draw code, which uses the glDrawTexfOES extension. It works fine when I test it on the emulator, but on my Samsung Galaxy S2 all the textures seem to be drawn white.
To make sure I didn't make any mistakes I copied the source from a tutorial and ran it with the same results. The tutorial code I am using can be seen here.
My textures are in .PNG format and power-of-two sized, and I am loading them from the drawable resources folder, although I have tried some other locations such as drawable-nodpi, as I have seen suggested.
I have also checked the result of glGenTextures which I have read can give odd values for certain phones but seems to be giving the correct values (1,2,3..).
Does anybody know why this could be happening or suggest some other checks I can do to figure out what is going wrong?
Here is a slightly modified version of the example code I linked above to keep things simple.
public void onSurfaceCreated(GL10 gl10, EGLConfig eglConfig) {
gl10.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_FASTEST);
// Set the background colour to black ( rgba ).
gl10.glClearColor(0.0f, 0.0f, 0.0f, 1);
// Enable Flat Shading.
gl10.glShadeModel(GL10.GL_FLAT);
// We don't need to worry about depth testing!
gl10.glDisable(GL10.GL_DEPTH_TEST);
// Set OpenGL to optimise for 2D Textures
gl10.glEnable(GL10.GL_TEXTURE_2D);
// Disable 3D specific features.
gl10.glDisable(GL10.GL_DITHER);
gl10.glDisable(GL10.GL_LIGHTING);
gl10.glTexEnvx(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_MODULATE);
// Initial clear of the screen.
gl10.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
// Test for draw texture
// Test for device specific extensions
String extensions = gl10.glGetString(GL10.GL_EXTENSIONS);
boolean drawTexture = extensions.contains("draw_texture");
Log.i("OpenGL Support - ver.:",
gl10.glGetString(GL10.GL_VERSION) + " renderer:" +
gl10.glGetString(GL10.GL_RENDERER) + " : " +
(drawTexture ? "good to go!" : "forget it!!"));
// LOAD TEXTURE
mTextureName = new int[1];
// Generate Texture ID
gl10.glGenTextures(1, mTextureName, 0);
assert gl10.glGetError() == GL10.GL_NO_ERROR;
// Bind texture id / target (we want 2D of course)
gl10.glBindTexture(GL10.GL_TEXTURE_2D, mTextureName[0]);
// Open and input stream and read the image
InputStream is = mContext.getResources().openRawResource(R.drawable.asteroid);
Bitmap bitmap;
try {
bitmap = BitmapFactory.decodeStream(is);
} finally {
try {
is.close();
} catch (IOException e) {
e.printStackTrace();
}
}
// Build our crop region to be the size of the bitmap (ie full image)
mCrop = new int[4];
mCrop[0] = 0;
mCrop[1] = imageHeight = bitmap.getHeight();
mCrop[2] = imageWidth = bitmap.getWidth();
mCrop[3] = -bitmap.getHeight();
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
assert gl10.glGetError() == GL10.GL_NO_ERROR;
bitmap.recycle();
}
public void onSurfaceChanged(GL10 gl10, int i, int i1) {
gl10.glViewport(0, 0, i, i1);
/*
* Set our projection matrix. This doesn't have to be done each time we
* draw, but usually a new projection needs to be set when the viewport
* is resized.
*/
float ratio = (float) i / i1;
gl10.glMatrixMode(GL10.GL_PROJECTION);
gl10.glLoadIdentity();
gl10.glFrustumf(-ratio, ratio, -1, 1, 1, 10);
}
public void onDrawFrame(GL10 gl) {
// Just clear the screen and depth buffer.
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
// Begin drawing
//--------------
// These function calls can be experimented with for various effects such as transparency
// although certain functionality maybe device specific.
gl.glShadeModel(GL10.GL_FLAT);
gl.glEnable(GL10.GL_BLEND);
gl.glBlendFunc(GL10.GL_ONE, GL10.GL_ONE_MINUS_SRC_ALPHA);
gl.glColor4x(0x10000, 0x10000, 0x10000, 0x10000);
// Setup correct projection matrix
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glPushMatrix();
gl.glLoadIdentity();
gl.glOrthof(0.0f, mWidth, 0.0f, mHeight, 0.0f, 1.0f);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glPushMatrix();
gl.glLoadIdentity();
gl.glEnable(GL10.GL_TEXTURE_2D);
// Draw all Textures
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureName[0]);
((GL11)gl).glTexParameteriv(GL10.GL_TEXTURE_2D, GL11Ext.GL_TEXTURE_CROP_RECT_OES, mCrop, 0);
((GL11Ext)gl).glDrawTexfOES(0, 0, 0, imageWidth, imageHeight);
// Finish drawing
gl.glDisable(GL10.GL_BLEND);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glPopMatrix();
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glPopMatrix();
}
Have you tried putting all the images in drawable-nodpi and nowhere else?
I don't think it is likely to matter, but try these lines just before attaching the renderer.
glSurfaceView.setZOrderOnTop(true);
glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
glSurfaceView.getHolder().setFormat(PixelFormat.RGBA_8888);
The error could be related to transparency and the PNG format.
If it doesn't work, could you please paste the code related with GLSurfaceView and the Renderer?
Thanks!
The reason I'm asking this is that our app (The Elements) runs fine on a Droid and a Nexus One, our two test phones, but not correctly on our recently acquired Atrix 4G. What draws is a skewed version of what should draw, with all the colors replaced by alternating lines of (approximately) cyan, magenta, and yellow, which leads us to believe that one of the primary colors for the sand particles is missing depending on which line it is on. I'm sorry for the unclear description; we had images, but since this account doesn't have 10 reputation we couldn't post them.
Here is the code of our gl.c file, which does the texturing and rendering:
/*
* gl.c
* --------------------------
* Defines the gl rendering and initialization
* functions appInit, appDeinit, and appRender.
*/
#include "gl.h"
#include <android/log.h>
unsigned int textureID;
float vertices[] =
{0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f};
float texture[] =
{0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f};
unsigned char indices[] =
{0, 1, 3, 0, 3, 2};
int texWidth = 1, texHeight = 1;
void glInit()
{
//Set some properties
glShadeModel(GL_FLAT);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);
//Generate the new texture
glGenTextures(1, &textureID);
//Bind the texture
glBindTexture(GL_TEXTURE_2D, textureID);
//Enable 2D texturing
glEnable(GL_TEXTURE_2D);
//Disable depth testing
glDisable(GL_DEPTH_TEST);
//Enable the vertex and coord arrays
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
//Set tex params
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
//Set up texWidth and texHeight
//Allocate a dummy pixel array and create the initial (empty) texture image
unsigned char *emptyPixels = (unsigned char*) malloc(3 * texWidth * texHeight);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texWidth, texHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, emptyPixels);
//Free the dummy array
free(emptyPixels);
//Set the pointers
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texture);
}
void glRender()
{
//Check for changes in screen dimensions or work dimensions and handle them
if(dimensionsChanged)
{
vertices[2] = (float) screenWidth;
vertices[5] = (float) screenHeight;
vertices[6] = (float) screenWidth;
vertices[7] = (float) screenHeight;
texture[2] = (float) workWidth/texWidth;
texture[5] = (float) workHeight/texHeight;
texture[6] = (float) workWidth/texWidth;
texture[7] = (float) workHeight/texHeight;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
if (!flipped)
{
glOrthof(0, screenWidth, screenHeight, 0, -1, 1); //--Device
}
else
{
glOrthof(0, screenWidth, 0, -screenHeight, -1, 1); //--Emulator
}
dimensionsChanged = FALSE;
zoomChanged = FALSE;
}
else if(zoomChanged)
{
texture[2] = (float) workWidth/texWidth;
texture[5] = (float) workHeight/texHeight;
texture[6] = (float) workWidth/texWidth;
texture[7] = (float) workHeight/texHeight;
zoomChanged = FALSE;
}
//__android_log_write(ANDROID_LOG_INFO, "TheElements", "updateview begin");
UpdateView();
//__android_log_write(ANDROID_LOG_INFO, "TheElements", "updateview end");
//Clear the screen
glClear(GL_COLOR_BUFFER_BIT);
//Sub the work portion of the tex
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, workWidth, workHeight, GL_RGB, GL_UNSIGNED_BYTE, colors);
//Actually draw the rectangle with the texture on it
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, indices);
}
Any ideas as to what the difference is between the Atrix 4G and other phones in terms of OpenGL or why our app is doing what it is in general are much appreciated! Thanks in advance.
Here is an example of what it looks like: http://imgur.com/Oyw64
I know it's a bit late to reply, but the real reason you're seeing the mixed colors is due to OpenGL's scanline packing alignment -- not due to any driver bug or power-of-two sized texture issue.
Your glTexSubImage2D call is sending data in GL_RGB format, so I'm guessing your colors buffer is 3 bytes per pixel. Odds are the Droid and Nexus One phones have a default unpack alignment of 1, but the Tegra 2 defaults to an alignment of 4. This means your 3-byte-per-pixel rows can become misaligned with what the driver expects after every scanline, and a byte or two will be skipped at the start of the next scanline, resulting in the colors you see. The reason this works with a power-of-two sized texture is that your buffer then just happens to be aligned properly for the next scanline. Basically this is the same issue as loading BMPs, where each scanline has to be padded to 4 bytes, regardless of the bit depth of the image.
You can explicitly disable any alignment padding by calling glPixelStorei(GL_UNPACK_ALIGNMENT, 1);. Note that changing this only affects the way OpenGL interprets your texture data, so there is no rendering performance penalty for changing this value. When the texture is sent to the graphics subsystem, the scanlines are stored in whatever format is optimal for the hardware, but the driver still has to know how to unpack your data properly. However, since you are changing the texture data every frame, instead of the driver being able to do one memcpy() to upload the entire texture, it may have to do one memcpy() per scanline in order to upload it. This is not likely to be a major bottleneck, but if you are looking for the best performance, you may want to query the driver's default unpack alignment on startup using glGetIntegerv(GL_UNPACK_ALIGNMENT, &align); and pad your buffer's rows accordingly at runtime.
Here's the specification of glPixelStorei() for reference.
OK, we finally found the actual problem. It turns out that glTexSubImage2D() actually requires the WIDTH to be a power of two, but not the height, on some GPUs including the Tegra 2. We thought that it was only the texture itself that needed power-of-two dimensions, and that's where we were wrong. We're going to have to do a bit of recoding, but hopefully this will work out in the end (AT LAST!!).
The Atrix 4G is the first prominent phone that uses Nvidia's Tegra GPU. As such, it has an entirely different OpenGL implementation than previous Android devices. Either you are observing a bug in the Tegra hardware+software combination, or your application was relying on undefined behavior and you were getting lucky on other devices.
You may want to file a bug report with Nvidia.