I started writing a game for Android using OpenGL ES and just finished the draw code, which uses the glDrawTexfOES extension. It works fine when I test it on the emulator, but on my Samsung Galaxy S2 all the textures are drawn white.
To make sure I didn't make any mistakes I copied the source from a tutorial and ran it with the same results. The tutorial code I am using can be seen here.
My textures are PNGs with power-of-two dimensions, and I am loading them from res/drawable (via R.drawable), although I have also tried other locations such as drawable-nodpi, as I have seen suggested.
I have also checked the result of glGenTextures, which I have read can return odd values on certain phones, but it seems to be giving sensible values (1, 2, 3, ...).
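Roughly, the check looks like this (a simplified sketch, not my exact code):
int[] ids = new int[1];
gl10.glGenTextures(1, ids, 0);
// Expect a small positive id here and GL_NO_ERROR (0) from glGetError
Log.d("TextureCheck", "id=" + ids[0] + " error=" + gl10.glGetError());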
Does anybody know why this could be happening or suggest some other checks I can do to figure out what is going wrong?
Here is a slightly modified version of the example code I linked above to keep things simple.
public void onSurfaceCreated(GL10 gl10, EGLConfig eglConfig) {
gl10.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_FASTEST);
// Set the background colour to black ( rgba ).
gl10.glClearColor(0.0f, 0.0f, 0.0f, 1);
// Enable Flat Shading.
gl10.glShadeModel(GL10.GL_FLAT);
// We don't need to worry about depth testing!
gl10.glDisable(GL10.GL_DEPTH_TEST);
// Set OpenGL to optimise for 2D Textures
gl10.glEnable(GL10.GL_TEXTURE_2D);
// Disable 3D specific features.
gl10.glDisable(GL10.GL_DITHER);
gl10.glDisable(GL10.GL_LIGHTING);
gl10.glTexEnvx(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_MODULATE);
// Initial clear of the screen.
gl10.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
// Test for draw texture
// Test for device specific extensions
String extensions = gl10.glGetString(GL10.GL_EXTENSIONS);
boolean drawTexture = extensions.contains("draw_texture");
Log.i("OpenGL Support - ver.:",
gl10.glGetString(GL10.GL_VERSION) + " renderer:" +
gl10.glGetString(GL10.GL_RENDERER) + " : " +
(drawTexture ? "good to go!" : "forget it!!"));
// LOAD TEXTURE
mTextureName = new int[1];
// Generate Texture ID
gl10.glGenTextures(1, mTextureName, 0);
assert gl10.glGetError() == GL10.GL_NO_ERROR;
// Bind texture id / target (we want 2D of course)
gl10.glBindTexture(GL10.GL_TEXTURE_2D, mTextureName[0]);
// Open an input stream and read the image
InputStream is = mContext.getResources().openRawResource(R.drawable.asteroid);
Bitmap bitmap;
try {
bitmap = BitmapFactory.decodeStream(is);
} finally {
try {
is.close();
} catch (IOException e) {
e.printStackTrace();
}
}
// Build our crop region to be the size of the bitmap (ie full image)
mCrop = new int[4];
mCrop[0] = 0;
mCrop[1] = imageHeight = bitmap.getHeight();
mCrop[2] = imageWidth = bitmap.getWidth();
mCrop[3] = -bitmap.getHeight();
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
assert gl10.glGetError() == GL10.GL_NO_ERROR;
bitmap.recycle();
}
public void onSurfaceChanged(GL10 gl10, int i, int i1) {
gl10.glViewport(0, 0, i, i1);
/*
* Set our projection matrix. This doesn't have to be done each time we
* draw, but usually a new projection needs to be set when the viewport
* is resized.
*/
float ratio = (float) i / i1;
gl10.glMatrixMode(GL10.GL_PROJECTION);
gl10.glLoadIdentity();
gl10.glFrustumf(-ratio, ratio, -1, 1, 1, 10);
}
public void onDrawFrame(GL10 gl) {
// Just clear the screen and depth buffer.
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
// Begin drawing
//--------------
// These function calls can be experimented with for various effects such as transparency
// although certain functionality may be device specific.
gl.glShadeModel(GL10.GL_FLAT);
gl.glEnable(GL10.GL_BLEND);
gl.glBlendFunc(GL10.GL_ONE, GL10.GL_ONE_MINUS_SRC_ALPHA);
gl.glColor4x(0x10000, 0x10000, 0x10000, 0x10000);
// Setup correct projection matrix
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glPushMatrix();
gl.glLoadIdentity();
gl.glOrthof(0.0f, mWidth, 0.0f, mHeight, 0.0f, 1.0f);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glPushMatrix();
gl.glLoadIdentity();
gl.glEnable(GL10.GL_TEXTURE_2D);
// Draw all Textures
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureName[0]);
((GL11)gl).glTexParameteriv(GL10.GL_TEXTURE_2D, GL11Ext.GL_TEXTURE_CROP_RECT_OES, mCrop, 0);
((GL11Ext)gl).glDrawTexfOES(0, 0, 0, imageWidth, imageHeight);
// Finish drawing
gl.glDisable(GL10.GL_BLEND);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glPopMatrix();
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glPopMatrix();
}
Have you tried putting all the images in drawable-nodpi and nowhere else?
I'm not sure it matters, but try these lines just before attaching the renderer.
glSurfaceView.setZOrderOnTop(true);
glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
glSurfaceView.getHolder().setFormat(PixelFormat.RGBA_8888);
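For context, a minimal sketch of where these calls would go, assuming the view is built in the Activity's onCreate (MyRenderer stands in for your own Renderer class):
GLSurfaceView glSurfaceView = new GLSurfaceView(this);
glSurfaceView.setZOrderOnTop(true);
glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
glSurfaceView.getHolder().setFormat(PixelFormat.RGBA_8888);
// Attach the renderer only after the EGL config and holder format are set
glSurfaceView.setRenderer(new MyRenderer(this));
setContentView(glSurfaceView);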
The error could be related to transparency and the PNG format.
If it doesn't work, could you please paste the code related to the GLSurfaceView and the Renderer?
Thanks!
Here is my problem:
I have a GLSurfaceView with a Renderer and so on. Everything works just as I wanted on older Android versions, but on newer versions (I guess > 4.x) it just shows a black screen without any Bitmaps. For example, if I use gl.glClearColor(0.1f, 0.2f, 0.3f, 0.5f); in my onSurfaceCreated method, the screen changes from black to that color. So I think the problem must be the camera looking in the wrong direction or something, because the background color is drawn.
Since I am pretty new to OpenGL, I wanted to ask whether there is any connection between the Android version and the OpenGL camera, or something like that?
Many people say my bitmap sizes have to be powers of 2, but that doesn't solve anything.
Here is my Renderer:
public class GlRenderer implements Renderer {
@Override
public void onDrawFrame(GL10 gl) {
// clear Screen Buffer
gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
// Reset the Modelview Matrix
gl.glLoadIdentity();
gl.glTranslatef(0.0f, 0.0f, -5.0f); // move 5 units INTO the screen
// is the same as moving the camera 5 units away
updateLogic(gl);
drawEverything(gl);
}
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
gl.glViewport(0, 0, width, height);
gl.glMatrixMode(GL10.GL_PROJECTION); // or some matrix uniform if using shaders
gl.glLoadIdentity();
gl.glOrthof(0, width, height, 0, -1, 1); // this allows passing vertices in 'canvas pixel' coordinates
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
}
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
gl.glDisable(GL10.GL_DITHER);
gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_FASTEST);
gl.glEnable(GL10.GL_TEXTURE_2D); //Enable Texture Mapping ( NEW )
gl.glShadeModel(GL10.GL_SMOOTH); //Enable Smooth Shading
gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f); //Set Background
gl.glEnable(GL10.GL_BLEND);
gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
}
}
Not sure if this is the issue you are having, but try changing the following line:
gl.glOrthof(0, width, height, 0, -1, 1);
to
gl.glOrthof(0, width, height, 0, 1, -1);
Notice that the near/far values are inverted. See this for a description of this madness :)
I was recently porting my own 2D rendering engine to Android using Mono for Android.
Everything went well except that I cannot draw textures at all; they all appear empty.
So I started a new, clean project to test it, wrote it following the Java sample code, and found that I cannot draw the texture there either.
Here is the code I used for testing:
class GLView1 : AndroidGameView
{
public GLView1(Context context)
: base(context)
{
this.GLContextVersion = OpenTK.Graphics.GLContextVersion.Gles1_1;
}
// This gets called when the drawing surface is ready
protected override void OnLoad(EventArgs e)
{
base.OnLoad(e);
// Run the render loop
Run();
}
bool initialized = false;
int[] textureID = new int[1];
// This gets called on each frame render
protected override void OnRenderFrame(FrameEventArgs e)
{
base.OnRenderFrame(e);
if (!initialized)
{
initialized = true;
GL.Enable(All.Texture2D);
GL.GenTextures(1, textureID);
GL.ShadeModel(All.Smooth);
GL.BindTexture(All.Texture2D, textureID[0]);
Bitmap bitmap = BitmapFactory.DecodeResource(Resources, Resource.Drawable.Icon);
Android.Opengl.GLUtils.TexImage2D(Android.Opengl.GLES10.GlTexture2d, 0, Android.Opengl.GLES10.GlRgba, bitmap, 0);
bitmap.Recycle();
GL.TexParameter(All.Texture2D, All.TextureMinFilter, (int)All.Nearest);
GL.TexParameter(All.Texture2D, All.TextureMagFilter, (int)All.Linear);
}
GL.ClearColor(0.5f, 0.5f, 0.5f, 1.0f);
GL.Clear((uint)All.ColorBufferBit);
GL.BindTexture(All.Texture2D, textureID[0]);
GL.EnableClientState(All.VertexArray);
GL.EnableClientState(All.TextureCoordArray);
GL.FrontFace(All.Cw);
GL.VertexPointer(2, All.Float, 0, square_vertices);
GL.TexCoordPointer(2, All.Float, 0, uv);
GL.DrawArrays(All.TriangleStrip, 0, 4);
SwapBuffers();
}
float[] uv ={
0,1,
0,0,
1,1,
1,0,
};
float[] square_vertices = {
-0.5f, -0.5f,
0.5f, -0.5f,
-0.5f, 0.5f,
0.5f, 0.5f,
};
byte[] square_colors = {
255, 255, 0, 255,
0, 255, 255, 255,
0, 0, 0, 0,
255, 0, 255, 255,
};
}
What I saw on my Android device is just a big white square.
PS: I tested it again in the Android emulator, where this code works, but on my real device it still shows a big white square.
What am I doing wrong here?
It could be a problem with BitmapFactory.decodeResource(). I was getting the same issue, but when I used openRawResource() to open the resource as a stream and then BitmapFactory.decodeStream() to load the bitmap, the textures suddenly appeared.
Looking further at BitmapFactory.decodeResource(), it seems to scale based on the target device's pixel density. See this: http://blog.poweredbytoast.com/loading-opengl-textures-in-android
This would cause all your neat power-of-two-sized textures to end up at odd sizes that the hardware doesn't accept (the emulator doesn't care about non-power-of-two sizes). So, use the code from the link above, copied here for your convenience:
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inScaled = false;
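For completeness, a hedged sketch of passing those options through to decodeResource before uploading (R.drawable.icon and context are placeholders for your own resource id and Context, not names from the post):
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.icon, opts);
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); // uploads at the original, unscaled size
bitmap.recycle();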
I'm getting in touch because I'm trying to use OpenGL on Android in order to make a 2D game :)
Here is my way of working:
I have a class GlRenderer:
public class GlRenderer implements Renderer
In this class, in onDrawFrame I call gameRender() and gameDisplay().
And in gameDisplay() I have:
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
// Reset the Modelview Matrix
gl.glMatrixMode(GL10.GL_PROJECTION); // Select the projection matrix
gl.glLoadIdentity(); // Reset the projection matrix
// Point to our buffers
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Set the face rotation
gl.glFrontFace(GL10.GL_CW);
for(Sprites...)
{
sprite.draw(gl, att.getX(), att.getY());
}
//Disable the client state before leaving
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
And in the draw method of sprite I have:
_vertices[0] = x;
_vertices[1] = y;
_vertices[3] = x;
_vertices[4] = y + height;
_vertices[6] = x + width;
_vertices[7] = y;
_vertices[9] = x + width;
_vertices[10] = y + height;
if(vertexBuffer != null)
{
vertexBuffer.clear();
}
// fill the vertexBuffer with the vertices
vertexBuffer.put(_vertices);
// set the cursor position to the beginning of the buffer
vertexBuffer.position(0);
// bind the previously generated texture
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
// Point to our vertex buffer
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer.mByteBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer.mByteBuffer);
// Draw the vertices as triangle strip
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, _vertices.length / 3);
My problem is that I have a low frame rate; even at 30 FPS I sometimes lose frames with only 1 sprite (and it is the same with 50).
Am I doing something wrong? How can I improve FPS?
In general, you should not be changing your vertex buffer for every sprite drawn. And by "in general", I pretty much mean "never," unless you're making a particle system. And even then, you would use proper streaming techniques, not write a quad at a time.
For each sprite, you have a pre-built quad. To render it, you use a transform (the modelview matrix in ES 1.1, or shader uniforms in ES 2.0) to move the sprite from its neutral position to the actual position you want to see it at on screen.
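A minimal sketch of that idea using the GL10 fixed-function pipeline the question uses (unitQuadBuffer, textureBuffer, and the Sprite fields are placeholder names, not part of the original code):
// One shared unit quad spanning (0,0)-(1,1), built once at load time; each sprite is
// positioned and sized with the modelview matrix instead of rewriting vertex data.
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, unitQuadBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
for (Sprite sprite : sprites) {
    gl.glPushMatrix();
    gl.glTranslatef(sprite.x, sprite.y, 0f);      // move the quad to the sprite's position
    gl.glScalef(sprite.width, sprite.height, 1f); // stretch the unit quad to the sprite's size
    gl.glBindTexture(GL10.GL_TEXTURE_2D, sprite.textureId);
    gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
    gl.glPopMatrix();
}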
I am examining an interesting problem I'm facing with OpenGL lighting on Android. I'm working on a 3D Viewer where you can add and manipulate 3D objects. You can also set a light with different attributes. The problem I was facing with my Viewer was that the highlight on the 3D objects from the light (it is a point light) behaved strangely. If the light source was at the exact same point as the camera, the highlight would move in the opposite direction to what you would expect. (So if you move the object to the left, the highlight moves to the left edge of the object as well, instead of the right, which is what I was expecting.)
So to narrow the problem down further, I've created a small sample application that only renders a square, and I rotate that square around the camera position (the origin), which is also where the light is placed. This should result in all squares facing the camera directly, so that they would be completely highlighted. The result, though, looked like this:
Can it be that these artifacts appear because of the distortion you get on the border due to the projection?
In the first image the distance between the sphere and the camera is about 20 units and the size of the sphere is about 2. If I move the light closer to the object, the highlight looks a lot better, the way I'm expecting it to.
In the second image the radius at which the squares are located is 25 units.
I'm using OpenGL ES 1.1 (since I was struggling to get it to work with shaders in ES 2.0) on Android 3.1.
Here is some of the code I'm using:
public void onDrawFrame(GL10 gl) {
// Setting the camera
GLU.gluLookAt(gl, 0, 0, 0, 0f, 0f, -1f, 0f, 1.0f, 0.0f);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
for (int i = 0; i < 72; i++) {
gl.glPushMatrix();
gl.glRotatef(5f * i, 0, 1, 0);
gl.glTranslatef(0, 0, -25);
draw(gl);
gl.glPopMatrix();
}
}
public void draw(GL10 gl) {
setMaterial(gl);
gl.glEnable(GL10.GL_NORMALIZE);
gl.glEnableClientState(GL10.GL_NORMAL_ARRAY);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glFrontFace(GL10.GL_CCW);
// Enable the vertex and normal state
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer);
gl.glNormalPointer(GL10.GL_FLOAT, 0, mNormalBuffer);
gl.glDrawElements(GL10.GL_TRIANGLES, mIndexBuffer.capacity(), GL10.GL_UNSIGNED_SHORT, mIndexBuffer);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_NORMAL_ARRAY);
}
// Setting the light
private void drawLights(GL10 gl) {
// Point Light
float[] position = { 0, 0, 0, 1 };
float[] diffuse = { .6f, .6f, .6f, 1f };
float[] specular = { 1, 1, 1, 1 };
float[] ambient = { .2f, .2f, .2f, 1 };
gl.glEnable(GL10.GL_LIGHTING);
gl.glEnable(GL10.GL_LIGHT0);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glLightfv(GL10.GL_LIGHT0, GL_POSITION, position, 0);
gl.glLightfv(GL10.GL_LIGHT0, GL_DIFFUSE, diffuse, 0);
gl.glLightfv(GL10.GL_LIGHT0, GL_AMBIENT, ambient, 0);
gl.glLightfv(GL10.GL_LIGHT0, GL_SPECULAR, specular, 0);
}
private void setMaterial(GL10 gl) {
float shininess = 30;
float[] ambient = { 0, 0, .3f, 1 };
float[] diffuse = { 0, 0, .7f, 1 };
float[] specular = { 1, 1, 1, 1 };
gl.glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, diffuse, 0);
gl.glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, ambient, 0);
gl.glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, specular, 0);
gl.glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, shininess);
}
I'm setting the light at the beginning, when the activity is started (in onSurfaceCreated), and the material every time I draw a square.
The effect in your second example (with the squares) is rather due to the default non-local viewer that OpenGL uses. By default the eye-space view vector (the vector from the vertex to the camera, used for the specular highlight computation) is just taken to be the (0, 0, 1) vector, instead of the normalized vertex position. This approximation is only correct if the vertex is in the middle of the screen, and gets more and more incorrect the farther you move towards the boundary of the screen.
To change this and let OpenGL use the real vector from the vertex to the camera, just use the glLightModel function, especially
glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER, GL_TRUE);
I'm not sure if this is also the cause for your first problem (with the sphere), but maybe, just try it.
EDIT: It seems you cannot use GL_LIGHT_MODEL_LOCAL_VIEWER in OpenGL ES. In this case there is no way around this problem, except switching to OpenGL ES 2.0 and doing all lighting computations yourself, of course.
Your light is probably moving when you're moving your object.
Take a look at this answer http://www.opengl.org/resources/faq/technical/lights.htm#ligh0050
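As a hedged illustration of the point in that FAQ entry (not taken from the question's code; lightPosition is a placeholder float[4]): the light position is transformed by whatever modelview matrix is current when glLightfv(GL_POSITION) is called, so where you make that call determines what the light is attached to.
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
// Option (a), a "headlight": set the position while the modelview is identity,
// so the light stays fixed relative to the camera.
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_POSITION, lightPosition, 0);
// Option (b), a world-fixed light: apply the camera transform first, then set the position,
// before any per-object glTranslatef/glRotatef calls.
GLU.gluLookAt(gl, 0f, 0f, 5f, 0f, 0f, 0f, 0f, 1f, 0f);
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_POSITION, lightPosition, 0);
// Per-object transforms issued after this point move the objects but not the light.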
I've been beating my head against the desk trying to figure this out for days now, and after scouring Stack Overflow and the web, I haven't found any examples that have worked for me. I've finally got code that seems close, so maybe you guys (and gals?) can help me figure this out.
My first problem is that I'm trying to implement a motion blur by taking a screen grab as a texture, then drawing the texture over the next frame with transparency -- or using more frames for more blur. (If anyone's interested, this is the guide I followed: http://www.codeproject.com/KB/openGL/MotionBlur.aspx)
I've got the screen saving as a texture working fine. The issue I'm having is drawing in Ortho mode on top of the screen. After much head banging, I finally got a basic square drawing, but my lack of OpenGL ES understanding and of an easy-to-follow example are holding me back now. I need to take the texture I saved and draw it into the square I drew. Nothing I've been doing seems to work.
Also, my second problem is drawing more complex 3D models in Ortho mode. I can't seem to get any models to draw. I'm using the (slightly customized) min3d framework (http://code.google.com/p/min3d/), and I'm trying to draw Object3d's in Ortho mode just like I draw them in Perspective mode. As I understand it, they should draw the same, just without perspective depth. Yet I don't seem to see them at all.
Here's the code I'm working with. I've tried a ton of different things and this is the closest I've gotten (actually drawing something on the screen that can be seen). I still have no idea how to get a proper 3d model drawing in the ortho view. I'm sure I'm doing something horribly wrong and probably completely misunderstanding some basic aspects of OpenGL drawing. Let me know if there's any other code I need to post.
// Gets called once, before all drawing occurs
//
private void reset()
{
// Reset TextureManager
Shared.textureManager().reset();
// Do OpenGL settings which we are using as defaults, or which we will not be changing on-draw
// Explicit depth settings
_gl.glEnable(GL10.GL_DEPTH_TEST);
_gl.glClearDepthf(1.0f);
_gl.glDepthFunc(GL10.GL_LESS);
_gl.glDepthRangef(0,1f);
_gl.glDepthMask(true);
// Alpha enabled
_gl.glEnable(GL10.GL_BLEND);
_gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
// "Transparency is best implemented using glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
// with primitives sorted from farthest to nearest."
// Texture
_gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST); // (the OpenGL default is GL_NEAREST_MIPMAP_LINEAR)
_gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); // (is OpenGL default)
// CCW frontfaces only, by default
_gl.glFrontFace(GL10.GL_CCW);
_gl.glCullFace(GL10.GL_BACK);
_gl.glEnable(GL10.GL_CULL_FACE);
// Disable lights by default
for (int i = GL10.GL_LIGHT0; i < GL10.GL_LIGHT0 + NUM_GLLIGHTS; i++) {
_gl.glDisable(i);
}
//
// Scene object init only happens here, when we get GL for the first time
//
}
// Called every frame
//
protected void drawScene()
{
if(_scene.fogEnabled() == true) {
_gl.glFogf(GL10.GL_FOG_MODE, _scene.fogType().glValue());
_gl.glFogf(GL10.GL_FOG_START, _scene.fogNear());
_gl.glFogf(GL10.GL_FOG_END, _scene.fogFar());
_gl.glFogfv(GL10.GL_FOG_COLOR, _scene.fogColor().toFloatBuffer() );
_gl.glEnable(GL10.GL_FOG);
} else {
_gl.glDisable(GL10.GL_FOG);
}
// Sync all of the object drawing so that updates in the mover
// thread can be synced if necessary
synchronized(Renderer.SYNC)
{
for (int i = 0; i < _scene.children().size(); i++)
{
Object3d o = _scene.children().get(i);
if(o.animationEnabled())
{
((AnimationObject3d)o).update();
}
drawObject(o);
}
}
//
//
//
// Draw the blur
// Set Up An Ortho View
_switchToOrtho();
_drawMotionBlur();
// Switch back to the previous view
_switchToPerspective();
_saveScreenToTexture("blur", 512);
}
private void _switchToOrtho()
{
// Set Up An Ortho View
_gl.glDisable(GL10.GL_DEPTH_TEST);
_gl.glMatrixMode(GL10.GL_PROJECTION); // Select Projection
_gl.glPushMatrix(); // Push The Matrix
_gl.glLoadIdentity(); // Reset The Matrix
_gl.glOrthof(0f, 480f, 0f, 800f, -1f, 1f);
//_gl.glOrthof(0f, 480f, 0f, 800f, -100f, 100f);
_gl.glMatrixMode(GL10.GL_MODELVIEW); // Select Modelview Matrix
_gl.glPushMatrix(); // Push The Matrix
_gl.glLoadIdentity(); // Reset The Matrix
}
private void _switchToPerspective()
{
// Switch back to the previous view
_gl.glEnable(GL10.GL_DEPTH_TEST);
_gl.glMatrixMode(GL10.GL_PROJECTION);
_gl.glPopMatrix();
_gl.glMatrixMode(GL10.GL_MODELVIEW);
_gl.glPopMatrix(); // Pop The Matrix
}
private void _saveScreenToTexture(String $textureId, int $size)
{
// Save the screen as a texture
_gl.glViewport(0, 0, $size, $size);
_gl.glBindTexture(GL10.GL_TEXTURE_2D, _textureManager.getGlTextureId($textureId));
_gl.glCopyTexImage2D(GL10.GL_TEXTURE_2D,0,GL10.GL_RGB,0,0,512,512,0);
_gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
_gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
_gl.glViewport(0, 0, 480, 800);
}
private void _drawMotionBlur()
{
// Vertices
float squareVertices[] = {
-3f, 0f, // Bottom Left
475f, 0f, // Bottom Right
475f, 800f, // Top Right
-3f, 800f // Top Left
};
ByteBuffer vbb = ByteBuffer.allocateDirect(squareVertices.length * 4);
vbb.order(ByteOrder.nativeOrder());
FloatBuffer vertexBuffer = vbb.asFloatBuffer();
vertexBuffer.put(squareVertices);
vertexBuffer.position(0);
//
//
// Textures
FloatBuffer textureBuffer; // buffer holding the texture coordinates
float texture[] = {
// Mapping coordinates for the vertices
0.0f, 1.0f, // top left (V2)
0.0f, 0.0f, // bottom left (V1)
1.0f, 1.0f, // top right (V4)
1.0f, 0.0f // bottom right (V3)
};
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(squareVertices.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
byteBuffer = ByteBuffer.allocateDirect(texture.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
textureBuffer = byteBuffer.asFloatBuffer();
textureBuffer.put(texture);
textureBuffer.position(0);
//
//
//
_gl.glLineWidth(3.0f);
_gl.glTranslatef(5.0f, 0.0f, 0.0f);
_gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
_gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
_gl.glDrawArrays(GL10.GL_LINE_LOOP, 0, 4);
//_gl.glTranslatef(100.0f, 0.0f, 0.0f);
//_gl.glDrawArrays(GL10.GL_LINE_LOOP, 0, 4);
//_gl.glTranslatef(100.0f, 0.0f, 0.0f);
//_gl.glDrawArrays(GL10.GL_LINE_LOOP, 0, 4);
_gl.glEnable(GL10.GL_TEXTURE_2D);
_gl.glEnable(GL10.GL_BLEND);
_gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
_gl.glLoadIdentity();
//
//
//
_gl.glBindTexture(GL10.GL_TEXTURE_2D, _textureManager.getGlTextureId("blur"));
_gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
_gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
//
//
//
_gl.glDisable(GL10.GL_BLEND);
_gl.glDisable(GL10.GL_TEXTURE_2D);
_gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
EDIT: Here's a simpler example; it's all in one function and doesn't include any of the save-the-screen-to-a-texture code. It just draws a 3D scene, switches to Ortho, draws a square with a texture, then switches back to Perspective.
// Called every frame
//
protected void drawScene()
{
// Draw the 3d models in perspective mode
// This part works (uses min3d) and draws a 3d scene
//
for (int i = 0; i < _scene.children().size(); i++)
{
Object3d o = _scene.children().get(i);
if(o.animationEnabled())
{
((AnimationObject3d)o).update();
}
drawObject(o);
}
// Set Up The Ortho View to draw a square with a texture
// over the 3d scene
//
_gl.glDisable(GL10.GL_DEPTH_TEST);
_gl.glMatrixMode(GL10.GL_PROJECTION); // Select Projection
_gl.glPushMatrix(); // Push The Matrix
_gl.glLoadIdentity(); // Reset The Matrix
_gl.glOrthof(0f, 480f, 0f, 800f, -1f, 1f);
_gl.glMatrixMode(GL10.GL_MODELVIEW); // Select Modelview Matrix
_gl.glPushMatrix(); // Push The Matrix
_gl.glLoadIdentity(); // Reset The Matrix
// Draw A Square With A Texture
// (Assume that the texture "blur" is already created properly --
// it is as I can use it when drawing my 3d scene if I apply it
// to one of the min3d objects)
//
float squareVertices[] = {
-3f, 0f, // Bottom Left
475f, 0f, // Bottom Right
475f, 800f, // Top Right
-3f, 800f // Top Left
};
ByteBuffer vbb = ByteBuffer.allocateDirect(squareVertices.length * 4);
vbb.order(ByteOrder.nativeOrder());
FloatBuffer vertexBuffer = vbb.asFloatBuffer();
vertexBuffer.put(squareVertices);
vertexBuffer.position(0);
FloatBuffer textureBuffer; // buffer holding the texture coordinates
float texture[] = {
// Mapping coordinates for the vertices
0.0f, 1.0f, // top left (V2)
0.0f, 0.0f, // bottom left (V1)
1.0f, 1.0f, // top right (V4)
1.0f, 0.0f // bottom right (V3)
};
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(squareVertices.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
byteBuffer = ByteBuffer.allocateDirect(texture.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
textureBuffer = byteBuffer.asFloatBuffer();
textureBuffer.put(texture);
textureBuffer.position(0);
_gl.glLineWidth(3.0f);
_gl.glTranslatef(5.0f, 0.0f, 0.0f);
_gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
_gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
_gl.glDrawArrays(GL10.GL_LINE_LOOP, 0, 4);
_gl.glEnable(GL10.GL_TEXTURE_2D);
_gl.glEnable(GL10.GL_BLEND);
_gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
_gl.glLoadIdentity();
_gl.glBindTexture(GL10.GL_TEXTURE_2D, _textureManager.getGlTextureId("blur"));
_gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
_gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
_gl.glDisable(GL10.GL_BLEND);
_gl.glDisable(GL10.GL_TEXTURE_2D);
_gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Switch Back To The Perspective Mode
//
_gl.glEnable(GL10.GL_DEPTH_TEST);
_gl.glMatrixMode(GL10.GL_PROJECTION);
_gl.glPopMatrix();
_gl.glMatrixMode(GL10.GL_MODELVIEW);
_gl.glPopMatrix(); // Pop The Matrix
}
EDIT2: Thanks to Christian's answer, I removed the second glVertexPointer and the _gl.glBlendFunc(GL10.GL_ONE, GL10.GL_ONE); call (I deleted them from the sample code above as well so they wouldn't confuse the question). I now have a texture rendering, but only in one of the triangles that make up the square, so I'm seeing a triangle in the left portion of the screen with the texture applied. Why is it not being applied to both halves of the square? I think it's because I have only one gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4); call, so I'm literally only drawing one triangle.
First, you set the blend function to (GL_ONE, GL_ONE), which will just add the blur texture to the framebuffer and make the whole scene overbright. You probably want to use (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), but then you have to make sure your blur texture has the correct alpha, by configuring the texture environment to use a constant value for the alpha (instead of the texture's) or by using GL_MODULATE with a (1, 1, 1, 0.5) coloured square (see the sketch after the next point). Alternatively, use a fragment shader.
Second, you specify a size 3 in the second call to glVertexPointer, but your data are 2d vectors (the first call is right).
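A hedged sketch of the GL_MODULATE variant from the first point (blurTextureId is a placeholder for the screen-grab texture id): the quad's constant colour supplies the 0.5 alpha that the copied screen texture lacks.
_gl.glEnable(GL10.GL_BLEND);
_gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
_gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_MODULATE);
_gl.glColor4f(1f, 1f, 1f, 0.5f);   // RGB left untouched, alpha 0.5 -> 50% blend of the blur quad
_gl.glBindTexture(GL10.GL_TEXTURE_2D, blurTextureId);
// ... draw the full-screen textured quad here ...
_gl.glColor4f(1f, 1f, 1f, 1f);     // restore the default colour for later drawing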
glOrtho is not necessarily 2D; it's just a camera without perspective distortion (farther objects don't get smaller). The parameters to glOrtho specify your screen plane size in view coordinates. Thus if your scene covers the world in the unit cube, an ortho volume of 480x800 is just too large (this is no problem if you draw objects other than the perspective ones, such as your square or UI elements, but when you want to draw the same 3D objects, the scales have to match). Another thing is that in ortho the near and far distances still matter; everything that falls outside them is clipped away. So if your camera is at (0,0,0) and you view along -z with a glOrtho of (0,480,0,800,-1,1), you will only see those objects that intersect the (0,0,-1)-(480,800,1) box.
So keep in mind that glOrtho and glFrustum (or gluPerspective) both define a 3D viewing volume. In ortho it's a box, and in frustum it's, you guessed it, a frustum (a capped pyramid). Consult some more introductory texts on transformations and viewing if this was not clear enough.
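A minimal sketch of that contrast (generic GL10 code, not taken from the question; ratio is width/height and the clip distances are just illustrative):
gl.glMatrixMode(GL10.GL_PROJECTION);
// Perspective pass: view volume from 1 to 10 units in front of the camera.
gl.glLoadIdentity();
gl.glFrustumf(-ratio, ratio, -1f, 1f, 1f, 10f);
// Ortho pass for the same 3D objects: keep the volume on the same scale as the scene,
// not in pixels, and keep near/far wide enough to contain the geometry.
gl.glLoadIdentity();
gl.glOrthof(-ratio, ratio, -1f, 1f, 1f, 10f);
// Ortho pass for UI / full-screen quads: pixel coordinates are fine here,
// as long as the vertices are also given in pixels with z inside (-1, 1).
gl.glLoadIdentity();
gl.glOrthof(0f, 480f, 0f, 800f, -1f, 1f);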