Texture Loading Problems with OpenGL in Android

Okay, let me preface this by saying that I am very new to OpenGL as it pertains to Android, and while I've been reading up on it for some time, I cannot get past this roadblock in my coding.
Currently I am trying to write a class that loads a texture from a .png file in my drawables folder onto a .obj model I made in Blender. I did a UV unwrap on the Blender model, and then used the UV unwrap as a guide for the .png file.
The issue is that I can load the texture onto the model, but it renders as one solid color, which seems to be taken from the texture file. Clearly I don't understand enough about UV texturing in Blender, but there are so many different OpenGL libraries, and so much variation between PC and Android, that it's really hard to wrap my head around what works where.
I would be extremely grateful if somebody could help me with this. Here's some of the relevant code, I'll post more as necessary:
from TextureLoader:
public Texture getTexture(GL10 gl, final int ref) throws IOException {
Texture tex = (Texture) table.get(ref);
if (tex != null) {
return tex;
}
Log.i("Textures:", "Loading texture: " + ref);
tex = getTexture(gl, ref,
GL10.GL_TEXTURE_2D, // target
GL10.GL_RGBA, // dst pixel format
GL10.GL_LINEAR, // min filter (unused)
GL10.GL_NEAREST);
table.put(ref,tex);
return tex;
}
public Texture getTexture(GL10 gl, final int ref,
int target,
int dstPixelFormat,
int minFilter,
int magFilter) throws IOException {
if (!sReady) {
throw new RuntimeException("Texture Loader not prepared");
}
int srcPixelFormat = 0;
// create the texture ID for this texture
int id = createID(gl);
Texture texture = new Texture(target, id);
// bind this texture
gl.glBindTexture(target, id);
Bitmap bitmap = loadImage(ref);
texture.setWidth(bitmap.getWidth());
texture.setHeight(bitmap.getHeight());
if (bitmap.hasAlpha()) {
srcPixelFormat = GL10.GL_RGBA;
} else {
srcPixelFormat = GL10.GL_RGB;
}
// convert that image into a byte buffer of texture data
// (note: convertImageData() actually returns PNG-compressed file bytes, not raw
// pixels, so this buffer is not usable with glTexImage2D; GLUtils.texImage2D
// below uploads the bitmap directly instead)
ByteBuffer textureBuffer = convertImageData(bitmap);
if (target == GL10.GL_TEXTURE_2D) {
gl.glTexParameterf(target, GL10.GL_TEXTURE_MIN_FILTER, minFilter);
gl.glTexParameterf(target, GL10.GL_TEXTURE_MAG_FILTER, magFilter);
}
GLUtils.texImage2D(target, 0, bitmap, 0);
/*gl.glTexImage2D(target,
0,
dstPixelFormat,
get2Fold(bitmap.getWidth()),
get2Fold(bitmap.getHeight()),
0,
srcPixelFormat,
GL10.GL_UNSIGNED_BYTE,
textureBuffer);*/
bitmap.recycle();
return texture;
}
/**
 * Get the closest greater power of 2 to the fold number
 *
 * @param fold The target number
 * @return The power of 2
 */
private int get2Fold(int fold) {
int ret = 2;
while (ret < fold) {
ret *= 2;
}
return ret;
}
/**
 * Convert the bitmap into a byte buffer of texture data
 *
 * @param bitmap The bitmap to convert
 * @return A buffer containing the data
 */
private ByteBuffer convertImageData(Bitmap bitmap) {
ByteBuffer imageBuffer = null;
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream);
byte[] data = stream.toByteArray();
imageBuffer = ByteBuffer.allocateDirect(data.length);
imageBuffer.order(ByteOrder.nativeOrder());
imageBuffer.put(data, 0, data.length);
imageBuffer.flip();
return imageBuffer;
}
/**
 * Creates an integer buffer to hold specified ints
 * - strictly a utility method
 *
 * @param size how many ints to contain
 * @return the created IntBuffer
 */
protected IntBuffer createIntBuffer(int size) {
ByteBuffer temp = ByteBuffer.allocateDirect(4 * size);
temp.order(ByteOrder.nativeOrder());
return temp.asIntBuffer();
}
private Bitmap loadImage(int ref) {
Bitmap bitmap = null;
Matrix flip = new Matrix();
flip.postScale(1f, -1f);
// This will tell the BitmapFactory to not scale based on the device's pixel density:
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inScaled = false;
Bitmap temp = BitmapFactory.decodeResource(sContext.getResources(), ref, opts);
bitmap = Bitmap.createBitmap(temp, 0, 0, temp.getWidth(), temp.getHeight(), flip, true);
temp.recycle();
return bitmap;
}
from Texture:
public void bind(GL10 gl) {
gl.glBindTexture(target, textureID);
gl.glEnable(GL10.GL_TEXTURE_2D);
}
as it is called:
public void render() {
//Clear Screen And Depth Buffer
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glEnable(GL10.GL_LIGHTING);
gl.glPushMatrix();
gl.glTranslatef(0.0f, -1.2f, z); //Move down 1.2 Unit And Into The Screen 6.0
gl.glRotatef(xrot, 1.0f, 0.0f, 0.0f); //X
gl.glRotatef(yrot, 0.0f, 1.0f, 0.0f); //Y
texture.bind(gl);
model.draw(gl);
gl.glPopMatrix();
}

Related

OpenGL- Render to texture- a specific area of screen

I implemented an FBO in my OpenGL game, and I'm rendering what is drawn to the screen into a texture. The problem is that rendering to the texture starts from the lower-left corner. Look:
what is rendered to the default framebuffer:
what is rendered to the texture attached to the FBO:
but where I want it to be rendered to the texture is:
How can I do this? Here is the renderer class (the FBO work is done in the onDrawFrame function):
public class CurlRenderer implements GLSurfaceView.Renderer {
// Constant for requesting right page rect.
public static final int PAGE = 1;
// Set to true for checking quickly how perspective projection looks.
private static final boolean USE_PERSPECTIVE_PROJECTION = false;
// Background fill color.
private int mBackgroundColor;
// Curl meshes used for static and dynamic rendering.
private CurlMesh mCurlMesh;
private RectF mMargins = new RectF();
private CurlRenderer.Observer mObserver;
// Page rectangles.
private RectF mPageRect;
// View mode.
// Screen size.
private int mViewportWidth, mViewportHeight;
// Rect for render area.
private RectF mViewRect = new RectF();
private boolean first = true;
int[] fb, renderTex;
int texW = 300;
int texH = 256;
IntBuffer texBuffer;
int[] buf = new int[texW * texH];
GL11ExtensionPack gl11ep ;
/**
* Basic constructor.
*/
public CurlRenderer(CurlRenderer.Observer observer) {
mObserver = observer;
mCurlMesh = new CurlMesh(0);
mPageRect = new RectF();
}
/**
* Adds CurlMesh to this renderer.
*/
public synchronized void addCurlMesh(CurlMesh mesh) {
mCurlMesh = mesh;
}
/**
* Returns rect reserved for left or right page. Value page should be
* PAGE_LEFT or PAGE_RIGHT.
*/
public RectF getPageRect(int page) {
if (page == PAGE) {
return mPageRect;
}
return null;
}
public void setup(GL10 gl){
fb = new int[1];
renderTex = new int[1];
// generate
((GL11ExtensionPack)gl).glGenFramebuffersOES(1, fb, 0);
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glGenTextures(1, renderTex, 0);// generate texture
gl.glBindTexture(GL10.GL_TEXTURE_2D, renderTex[0]);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
// texBuffer = ByteBuffer.allocateDirect(buf.length*4).order(ByteOrder.nativeOrder()).asIntBuffer();
// gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE,GL10.GL_MODULATE);
gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGBA, texW, texH, 0, GL10.GL_RGBA, GL10.GL_UNSIGNED_SHORT_4_4_4_4, null);
gl.glDisable(GL10.GL_TEXTURE_2D);
}
boolean RenderStart(GL10 gl){
// Bind the framebuffer
((GL11ExtensionPack)gl).glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]);
// specify texture as color attachment
((GL11ExtensionPack)gl).glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES, GL10.GL_TEXTURE_2D, renderTex[0], 0);
int error = gl.glGetError();
if (error != GL10.GL_NO_ERROR) {
Log.d("err", "Background Load GLError: " + error+" ");
}
int status = ((GL11ExtensionPack)gl).glCheckFramebufferStatusOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES);
if (status != GL11ExtensionPack.GL_FRAMEBUFFER_COMPLETE_OES)
{
Log.d("err", "Background Load GLError: " + status+" ");;
return true;
}
gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
return true;
}
void RenderEnd(GL10 gl){
((GL11ExtensionPack)gl).glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, 0);
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glBindTexture(GL10.GL_TEXTURE_2D, renderTex[0]);
gl.glColor4f(1,1,1,1);
gl.glDisable(GL10.GL_TEXTURE_2D);
}
@Override
public synchronized void onDrawFrame(GL10 gl) {
if(first){
int h = GLES20.glGetError();
this.setup(gl);
if(h!=0){
Log.d("ERROR", "ERROR Happend"+h+"");
}
first = false;
}
mObserver.onDrawFrame();
//glClearColor registers the color we picked with the graphics card
gl.glClearColor(Color.red(mBackgroundColor) / 255f,
Color.green(mBackgroundColor) / 255f,
Color.blue(mBackgroundColor) / 255f,
Color.alpha(mBackgroundColor) / 255f);
//glClear then clears the buffer to the color registered above
gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
//resets the matrix to its origin, so glRotate and glTranslate won't cause problems later,
//because our page translation is relative to the paper's initial position,
// not to wherever it happens to be right now
gl.glLoadIdentity();
if (USE_PERSPECTIVE_PROJECTION) {
gl.glTranslatef(0, 0, -6f);
}
RenderStart(gl);
mCurlMesh.onDrawFrame(gl);
RenderEnd(gl);
mCurlMesh.onDrawFrame(gl);
}
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
gl.glViewport(0, 0, width, height);
mViewportWidth = width;
mViewportHeight = height;
float ratio = (float) width / height;
mViewRect.top = 1.0f;
mViewRect.bottom = -1.0f;
mViewRect.left = -ratio;
mViewRect.right = ratio;
updatePageRects();
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
if (USE_PERSPECTIVE_PROJECTION) {
GLU.gluPerspective(gl, 20f, (float) width / height, .1f, 100f);
} else {
GLU.gluOrtho2D(gl, mViewRect.left, mViewRect.right,
mViewRect.bottom, mViewRect.top);
}
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
}
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
// mCurlMesh.setup(gl);
gl.glClearColor(0f, 0f, 0f, 1f);
gl.glShadeModel(GL10.GL_SMOOTH);
gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
gl.glHint(GL10.GL_LINE_SMOOTH_HINT, GL10.GL_NICEST);
//gl.glHint(GL10.GL_POLYGON_SMOOTH_HINT, GL10.GL_NICEST);
gl.glEnable(GL10.GL_LINE_SMOOTH);
gl.glDisable(GL10.GL_DEPTH_TEST);
gl.glDisable(GL10.GL_CULL_FACE);
}
/**
* Change background/clear color.
*/
public void setBackgroundColor(int color) {
mBackgroundColor = color;
}
/**
* Set margins or padding. Note: margins are proportional. Meaning a value
* of .1f will produce a 10% margin.
*/
public synchronized void setMargins(float left, float top, float right,
float bottom) {
mMargins.left = left;
mMargins.top = top;
mMargins.right = right;
mMargins.bottom = bottom;
updatePageRects();
}
/**
 * Translates screen coordinates into view coordinates, i.e. converts a point's
 * coordinates on the screen (e.g. the pointer position) into the equivalent
 * coordinates on the CurlView.
 */
public void translate(PointF pt) {
pt.x = mViewRect.left + (mViewRect.width() * pt.x / mViewportWidth);
pt.y = mViewRect.top - (-mViewRect.height() * pt.y / mViewportHeight);
}
/**
* Recalculates page rectangles.
*/
private void updatePageRects() {
if (mViewRect.width() == 0 || mViewRect.height() == 0) {
return;
}
/**
 * TODO: this does exactly what I want, i.e. it sizes the page;
 * mPageRect is the mesh itself and mViewRect is the layout's view.
 */
mPageRect.set(mViewRect);
mPageRect.left += mViewRect.width() * mMargins.left;
mPageRect.right -= mViewRect.width() * mMargins.right;
mPageRect.top += mViewRect.height() * mMargins.top;
mPageRect.bottom -= mViewRect.height() * mMargins.bottom;
int bitmapW = (int) ((mPageRect.width() * mViewportWidth) / mViewRect.width());
int bitmapH = (int) ((mPageRect.height() * mViewportHeight) / mViewRect.height());
mObserver.onPageSizeChanged(bitmapW, bitmapH);
}
/**
* Observer for waiting render engine/state updates.
*/
public interface Observer {
/**
* Called from onDrawFrame called before rendering is started. This is
* intended to be used for animation purposes.
*/
public void onDrawFrame();
/**
* Called once page size is changed. Width and height tell the page size
* in pixels making it possible to update textures accordingly.
*/
public void onPageSizeChanged(int width, int height);
}
}
You're not setting the viewport for the FBO rendering. If you just want to draw the same part of the geometry as you draw to the default framebuffer, use the texture size for the viewport dimensions:
glViewport(0, 0, texW, texH);
Don't forget to set the viewport back to the appropriate size of the view/surface when you're done with FBO rendering, and start rendering to the default framebuffer again.
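In the code above, the two calls would slot into RenderStart() and RenderEnd() like this (a sketch; texW, texH, mViewportWidth and mViewportHeight are fields your renderer already defines):
boolean RenderStart(GL10 gl) {
    ((GL11ExtensionPack) gl).glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]);
    gl.glViewport(0, 0, texW, texH); // viewport matches the texture size
    // ... attach the texture, check completeness and clear, as before ...
    return true;
}
void RenderEnd(GL10 gl) {
    ((GL11ExtensionPack) gl).glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, 0);
    gl.glViewport(0, 0, mViewportWidth, mViewportHeight); // restore for the on-screen surface
    // ... rebind the texture and continue, as before ...
}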
To draw a different (sub-)section of the geometry, as indicated in your sketch, you have a few options:
Use a modelview transformation to translate/scale the geometry.
Adjust the projection transformation.
Adjust the viewport.
The results from using any of these may be slightly different, depending on what and how you render. Particularly if lighting is involved, or a perspective projection, not all options will give exactly the same result. In that case, you'll have to decide which behavior you want.
Changing one of the transformations is probably the most standard approach. But adjusting the viewport can be an elegant alternative, depending on what exactly you're trying to achieve.
For example, just roughly guessing the values based on your sketch, you could use:
glViewport(texW / 4, -texH / 4, texW / 2, texH);
This defines the viewport rectangle to approximately match the dashed orange rectangle in your sketch. You may need some more math for the values to maintain the aspect ratio, but this shows the fundamental idea.
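For instance, to preserve the proportions you could derive the rectangle from the on-screen aspect ratio instead of hard-coding it (a rough sketch, with made-up zoom and offset factors):
float screenAspect = (float) mViewportWidth / mViewportHeight;
int vpH = texH * 2;                   // zoom in 2x vertically
int vpW = (int) (vpH * screenAspect); // same proportions as the on-screen view
gl.glViewport(-vpW / 4, -vpH / 4, vpW, vpH); // the offsets pick which sub-region lands in the texture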

loading two textures on a GLSurfaceView one as background, one as foreground with alpha

I've set up a GLSurfaceView and want to do this:
load a texture as a background (make it fullscreen without stretching the image)
load another texture with alpha and make it the foreground, so the background picture stays visible through it, like a fog or rain effect (also fullscreen without stretching the image)
Any info on how to load a texture properly and set it on a GLSurfaceView is appreciated.
I have tried this code to load a texture on my GLSurfaceView, but it didn't work for me:
import javax.microedition.khronos.opengles.GL10;
import android.content.Context;
import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.drawable.Drawable;
import android.opengl.GLUtils;
public class TextureHelper {
/**
 * Helper method to load a GL texture from a bitmap
 *
 * Note that the caller should "recycle" the bitmap
 *
 * @return the ID of the texture returned from glGenTextures()
 */
public static int loadGLTextureFromBitmap( Bitmap bitmap, GL10 gl ) {
// Generate one texture pointer
int[] textureIds = new int[1];
gl.glGenTextures( 1, textureIds, 0 );
// bind this texture
gl.glBindTexture( GL10.GL_TEXTURE_2D, textureIds[0] );
// Create Nearest Filtered Texture
gl.glTexParameterf( GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR );
gl.glTexParameterf( GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR );
gl.glTexParameterf( GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT );
gl.glTexParameterf( GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT );
// Use the Android GLUtils to specify a two-dimensional texture image from our bitmap
GLUtils.texImage2D( GL10.GL_TEXTURE_2D, 0, bitmap, 0 );
return textureIds[0];
}
/**
 * Create a texture from a given resource
 *
 * @param resourceID the ID of the resource to be loaded
 * @param scaleToPO2 determines whether the image should be scaled up to the next highest power
 * of two, or whether it should be "inset" into such an image. Having textures whose
 * dimensions are a power of two is critical for performance in OpenGL.
 *
 * @return the ID of the texture returned from glGenTextures()
 */
public static int loadGLTextureFromResource( int resourceID, Context context, boolean scaleToPO2 , GL10 gl ) {
// pull in the resource
Bitmap bitmap = null;
Resources resources = context.getResources();
Drawable image = resources.getDrawable( resourceID );
float density = resources.getDisplayMetrics().density;
int originalWidth = (int)(image.getIntrinsicWidth() / density);
int originalHeight = (int)(image.getIntrinsicHeight() / density);
int powWidth = getNextHighestPO2( originalWidth );
int powHeight = getNextHighestPO2( originalHeight );
if ( scaleToPO2 ) {
image.setBounds( 0, 0, powWidth, powHeight );
} else {
image.setBounds( 0, 0, originalWidth, originalHeight );
}
// Create an empty, mutable bitmap
bitmap = Bitmap.createBitmap( powWidth, powHeight, Bitmap.Config.ARGB_4444 );
// get a canvas to paint over the bitmap
Canvas canvas = new Canvas( bitmap );
bitmap.eraseColor(0);
image.draw( canvas ); // draw the image onto our bitmap
int textureId = loadGLTextureFromBitmap( bitmap , gl );
bitmap.recycle();
return textureId;
}
/**
 * Calculates the next highest power of two for a given integer.
 *
 * @param n the number
 * @return a power of two equal to or higher than n
 */
public static int getNextHighestPO2( int n ) {
n -= 1;
n = n | (n >> 1);
n = n | (n >> 2);
n = n | (n >> 4);
n = n | (n >> 8);
n = n | (n >> 16);
n = n | (n >> 32);
return n + 1;
}
}
this is the onCreate code in my activity :
private GLSurfaceView mGLView;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
GLES20Renderer.context = this;
requestWindowFeature(Window.FEATURE_NO_TITLE);
getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
WindowManager.LayoutParams.FLAG_FULLSCREEN);
if (hasGLES20()) {
mGLView = new GLSurfaceView(this);
mGLView.setEGLContextClientVersion(2);
mGLView.setPreserveEGLContextOnPause(true);
mGLView.setRenderer(new GLES20Renderer());
} else {
mGLView = new GLSurfaceView(this);
mGLView.setEGLContextClientVersion(1);
mGLView.setPreserveEGLContextOnPause(true);
mGLView.setRenderer(new GLES20Renderer());
Log.i("S"," does not support open gl 2");
// you gotta get a phone that supports Open GL 2.0
}
setContentView(mGLView);
}
and this is the code for the GLSurfaceView.Renderer :
@Override
public void onDrawFrame(boolean firstDraw) {
int textureid = TextureHelper.loadGLTextureFromResource(R.drawable.ic_launcher, context, true, gl);
}
public void onSurfaceChanged(GL10 gl, int width, int height) {
gl.glViewport(0, 0, width, height);
}
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
gl.glClearColor(0.0f, 0.4f, 0.4f, 1.0f);
}
I think I missed out some code in the onDrawFrame part maybe :( and I have no idea how to load the other texture with alpha enabled...
Sorry if I am wrong, but I suppose you are a beginner in OpenGL, so I am going to try to help you in your learning.
First, I recommend using OpenGL ES 2.0 only, because it is supported by Android 2.2 (API level 8) and higher. Read the Android reference and tutorial; today, practically every Android version is above API level 8.
To draw anything with OpenGL ES 2.0, you have to call GLES20.glDrawArrays(..) or GLES20.glDrawElements(..) (see the OpenGL ES 2.0 reference). I don't see you calling either of these methods; a minimal sketch follows.
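Here is a minimal sketch of such a draw call, rendering a fullscreen quad (it assumes a compiled and linked shader program with an aPosition attribute, which you would still have to set up):
private static final float[] QUAD = {
        -1f, -1f,   1f, -1f,   -1f, 1f,   1f, 1f }; // fullscreen triangle strip

void drawFullscreenQuad(int program) {
    FloatBuffer vb = ByteBuffer.allocateDirect(QUAD.length * 4)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();
    vb.put(QUAD).position(0);
    GLES20.glUseProgram(program);
    int aPosition = GLES20.glGetAttribLocation(program, "aPosition");
    GLES20.glEnableVertexAttribArray(aPosition);
    GLES20.glVertexAttribPointer(aPosition, 2, GLES20.GL_FLOAT, false, 0, vb);
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4); // the call that actually draws
    GLES20.glDisableVertexAttribArray(aPosition);
}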
To answer your question:
There are a lot of ways to control the alpha of an image. The easiest is to create your own fragment shader. Read this great tutorial to learn the basic steps of creating shaders.
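A minimal sketch of a fragment shader that scales the texture's alpha, embedded as a Java string the way Android samples usually do it (uTexture, uAlpha and vTexCoord are names made up for this sketch):
private static final String FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "uniform sampler2D uTexture;\n" +
        "uniform float uAlpha;\n" +      // 0.0 = invisible, 1.0 = opaque
        "varying vec2 vTexCoord;\n" +
        "void main() {\n" +
        "  vec4 color = texture2D(uTexture, vTexCoord);\n" +
        "  gl_FragColor = vec4(color.rgb, color.a * uAlpha);\n" +
        "}\n";
For the foreground to actually let the background show through, blending must also be enabled, e.g. GLES20.glEnable(GLES20.GL_BLEND) together with GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA).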

Convert OpenGL ES 2.0 rendered texture to bitmap and back

I'd like to blur the rendered texture with RenderScript, and for that I need to convert it to bitmap format; to use the result I need to convert it back to an OpenGL texture.
The render-to-texture part is working. The problem has to be somewhere in the code below, but I don't understand why it doesn't work; I'm getting a black screen.
public void renderToTexture(){
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fb[0]);
GLES20.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
GLES20.glClear( GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
// specify texture as color attachment
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, renderTex[0], 0);
// attach render buffer as depth buffer
GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER, GLES20.GL_DEPTH_ATTACHMENT, GLES20.GL_RENDERBUFFER, depthRb[0]);
// check status
int status = GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER);
drawRender();
Bitmap bitmap = SavePixels(0,0,texW,texH);
//blur bitmap and get back a bluredBitmap not yet implemented
texture = TextureHelper.loadTexture(bluredBitmap, 128);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GLES20.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
GLES20.glClear( GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture);
drawRender2();
}
To create a bitmap I read pixels from the framebuffer, because I didn't find any other way to do it, but I'm open to other methods:
public static Bitmap SavePixels(int x, int y, int w, int h)
{
int b[]=new int[w*(y+h)];
int bt[]=new int[w*h];
IntBuffer ib=IntBuffer.wrap(b);
ib.position(0);
GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGB, GLES20.GL_UNSIGNED_BYTE, ib);
for(int i=0, k=0; i<h; i++, k++)
{
for(int j=0; j<w; j++)
{
int pix=b[i*w+j];
int pb=(pix>>16)&0xff;
int pr=(pix<<16)&0x00ff0000;
int pix1=(pix&0xff00ff00) | pr | pb;
bt[(h-k-1)*w+j]=pix1;
}
}
Bitmap sb=Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888);
return sb;
}
Here is the bitmap to texture code:
public static int loadTexture(final Bitmap pics, int size)
{
final int[] textureHandle = new int[1];
GLES20.glGenTextures(1, textureHandle, 0);
if (textureHandle[0] != 0)
{
// Read in the resource
final Bitmap bitmap = pics;
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
GLES20.glEnable(GLES20.GL_BLEND);
// Bind to the texture in OpenGL
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
// Set filtering
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
// Load the bitmap into the bound texture.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
// Recycle the bitmap, since its data has been loaded into OpenGL.
bitmap.recycle();
}
if (textureHandle[0] == 0)
{
throw new RuntimeException("Error loading texture.");
}
return textureHandle[0];
}
Have a look at the Android MediaCodec examples, in particular ExtractMpegFramesTest_egl14.java; the relevant snippet is here:
/**
 * Saves the current frame to disk as a PNG image.
 */
public void saveFrame(String filename) throws IOException {
// glReadPixels gives us a ByteBuffer filled with what is essentially big-endian RGBA
// data (i.e. a byte of red, followed by a byte of green...). To use the Bitmap
// constructor that takes an int[] array with pixel data, we need an int[] filled
// with little-endian ARGB data.
//
// If we implement this as a series of buf.get() calls, we can spend 2.5 seconds just
// copying data around for a 720p frame. It's better to do a bulk get() and then
// rearrange the data in memory. (For comparison, the PNG compress takes about 500ms
// for a trivial frame.)
//
// So... we set the ByteBuffer to little-endian, which should turn the bulk IntBuffer
// get() into a straight memcpy on most Android devices. Our ints will hold ABGR data.
// Swapping B and R gives us ARGB. We need about 30ms for the bulk get(), and another
// 270ms for the color swap.
//
// We can avoid the costly B/R swap here if we do it in the fragment shader (see
// http://stackoverflow.com/questions/21634450/ ).
//
// Having said all that... it turns out that the Bitmap#copyPixelsFromBuffer()
// method wants RGBA pixels, not ARGB, so if we create an empty bitmap and then
// copy pixel data in we can avoid the swap issue entirely, and just copy straight
// into the Bitmap from the ByteBuffer.
//
// Making this even more interesting is the upside-down nature of GL, which means
// our output will look upside-down relative to what appears on screen if the
// typical GL conventions are used. (For ExtractMpegFrameTest, we avoid the issue
// by inverting the frame when we render it.)
//
// Allocating large buffers is expensive, so we really want mPixelBuf to be
// allocated ahead of time if possible. We still get some allocations from the
// Bitmap / PNG creation.
mPixelBuf.rewind();
GLES20.glReadPixels(0, 0, mWidth, mHeight, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE,
mPixelBuf);
BufferedOutputStream bos = null;
try {
bos = new BufferedOutputStream(new FileOutputStream(filename));
Bitmap bmp = Bitmap.createBitmap(mWidth, mHeight, Bitmap.Config.ARGB_8888);
mPixelBuf.rewind();
bmp.copyPixelsFromBuffer(mPixelBuf);
bmp.compress(Bitmap.CompressFormat.PNG, 90, bos);
bmp.recycle();
} finally {
if (bos != null) bos.close();
}
if (VERBOSE) {
Log.d(TAG, "Saved " + mWidth + "x" + mHeight + " frame as '" + filename + "'");
}
}
You should have used:
GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, ib);
Your for loop is written to convert RGBA to ARGB_8888, so it needs RGBA input anyway; on ES 2.0, glReadPixels is only guaranteed to support the GL_RGBA/GL_UNSIGNED_BYTE combination.
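Putting both points together, a corrected sketch of SavePixels (your swap/flip loop is kept as-is, since it already assumes RGBA input):
public static Bitmap SavePixels(int x, int y, int w, int h)
{
    int b[] = new int[w * h];
    IntBuffer ib = IntBuffer.wrap(b);
    ib.position(0);
    // GL_RGBA + GL_UNSIGNED_BYTE is the read format every ES 2.0 implementation must support
    GLES20.glReadPixels(x, y, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, ib);
    int bt[] = new int[w * h];
    for (int i = 0; i < h; i++) {
        for (int j = 0; j < w; j++) {
            int pix = b[i * w + j];
            int pb = (pix >> 16) & 0xff;             // B from bits 16-23 down into ARGB's blue slot
            int pr = (pix << 16) & 0x00ff0000;       // R from bits 0-7 up into ARGB's red slot
            int pix1 = (pix & 0xff00ff00) | pr | pb; // alpha and green stay where they are
            bt[(h - i - 1) * w + j] = pix1;          // flip vertically
        }
    }
    return Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888);
}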

Android OpenGL ES rendering subdivided Mesh

I'm trying to render a subdivided mesh with a displacement texture and a color texture on it. To do so I go through every pixel, create a vertex for it, and move that vertex according to a black-and-white image I have. The problem is that when I render it, I get something that looks a bit like TV snow.
Here's the relevant code:
public Plane(Bitmap image, Bitmap depth)
{
this.image = image; //color image
this.depth = depth; //BW depth image
this.w = image.getWidth();
this.h = image.getHeight();
vertexCoords = vertexArray(); //places vertices in 3d
drawOrder = orderArray(); //sets the draw order
colorCoords = colorArray(); //sets color per vertex
ByteBuffer bb = ByteBuffer.allocateDirect(vertexCoords.length * 4);
bb.order(ByteOrder.nativeOrder());
vertexBuffer = bb.asFloatBuffer();
vertexBuffer.put(vertexCoords);
vertexBuffer.position(0);
ByteBuffer dlb = ByteBuffer.allocateDirect(drawOrder.length * 4);
dlb.order(ByteOrder.nativeOrder());
drawListBuffer = dlb.asShortBuffer();
drawListBuffer.put(drawOrder);
drawListBuffer.position(0);
ByteBuffer cbb = ByteBuffer.allocateDirect(colorCoords.length * 4);
cbb.order(ByteOrder.nativeOrder());
colorBuffer = cbb.asFloatBuffer();
colorBuffer.put(colorCoords);
colorBuffer.position(0);
}
public void draw(GL10 gl) {
// Counter-clockwise winding.
gl.glFrontFace(GL10.GL_CCW);
// Enable face culling.
gl.glEnable(GL10.GL_CULL_FACE);
// What faces to remove with the face culling.
gl.glCullFace(GL10.GL_BACK);
// Enabled the vertices buffer for writing and to be used during
// rendering.
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
// Specifies the location and data format of an array of vertex
// coordinates to use when rendering.
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
// Enable the color array buffer to be used during rendering.
gl.glEnableClientState(GL10.GL_COLOR_ARRAY); // NEW LINE ADDED.
// Point out the where the color buffer is.
gl.glColorPointer(4, GL10.GL_FLOAT, 0, colorBuffer); // NEW LINE ADDED.
gl.glDrawElements(GL10.GL_TRIANGLES, drawOrder.length,
GL10.GL_UNSIGNED_SHORT, drawListBuffer);
// Disable the vertices buffer.
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
// Disable face culling.
gl.glDisable(GL10.GL_CULL_FACE);
gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
}
What can I do to actually view the model instead of this snow? The patterns change if I turn my screen on and off, and they sometimes change randomly. It seems that the colors present in the original bitmap are also present in the snow (the snow color changes with different pictures), so I know I'm doing something right; I just don't know what's wrong here.
EDIT: here's the code for vertexArray()
public float[] vertexArray()
{
int totalPoints = w*h;
float[] arr = new float[totalPoints*3];
int i = 0;
for(int y = 0; y<h; y++)
{
for(int x = 0; x<w; x++)
{
arr[i] = x * 0.01f;
arr[i+1] = y * 0.01f;
arr[i+2] = 1.0f;//getDepth(x,y);
i+=3;
}
}
return arr;
}

Android texture only showing solid color

I am trying to display a single texture on a quad.
I had a working VertexObject, which drew a square (or any geometric object) fine. Now I tried expanding it to handle textures too, and the texture doesn't work: I only see the quad in one solid color.
The coordinate data is in an arrayList:
/*the vertices' coordinates*/
public int coordCount = 0;
/*float array of 3(x,y,z)*/
public ArrayList<Float> coordList = new ArrayList<Float>(coordCount);
/*the coordinates' indexes(if used)*/
/*maximum limit:32767*/
private int orderCount = 0;
private ArrayList<Short> orderList = new ArrayList<Short>(orderCount);
/*textures*/
public boolean textured;
private boolean textureIsReady;
private ArrayList<Float> textureList = new ArrayList<Float>(coordCount);
private Bitmap bitmap; //the image to be displayed
private int textures[]; //the textures' ids
The buffers are initialized in the following function:
/*Drawing is based on the buffers*/
public void refreshBuffers(){
/*Coordinates' List*/
float coords[] = new float[coordList.size()];
for(int i=0;i<coordList.size();i++){
coords[i]= coordList.get(i);
}
// initialize vertex byte buffer for shape coordinates
ByteBuffer bb = ByteBuffer.allocateDirect(
// (number of coordinate values * 4 bytes per float)
coords.length * 4);
// use the device hardware's native byte order
bb.order(ByteOrder.nativeOrder());
// create a floating point buffer from the ByteBuffer
vertexBuffer = bb.asFloatBuffer();
// add the coordinates to the FloatBuffer
vertexBuffer.put(coords);
// set the buffer to read the first coordinate
vertexBuffer.position(0);
/*Index List*/
short order[] = new short[(short)orderList.size()];
for(int i=0;i<order.length;i++){
order[i] = (short) orderList.get(i);
}
// initialize byte buffer for the draw list
ByteBuffer dlb = ByteBuffer.allocateDirect(
// (# of coordinate values * 2 bytes per short)
order.length * 2);
dlb.order(ByteOrder.nativeOrder());
orderBuffer = dlb.asShortBuffer();
orderBuffer.put(order);
orderBuffer.position(0);
/*texture list*/
if(textured){
float textureCoords[] = new float[textureList.size()];
for(int i=0;i<textureList.size();i++){
textureCoords[i] = textureList.get(i);
}
ByteBuffer byteBuf = ByteBuffer.allocateDirect(textureCoords.length * 4);
byteBuf.order(ByteOrder.nativeOrder());
textureBuffer = byteBuf.asFloatBuffer();
textureBuffer.put(textureCoords);
textureBuffer.position(0);
}
}
I load the image into the object with the following code:
public void initTexture(GL10 gl, Bitmap inBitmap){
bitmap = inBitmap;
loadTexture(gl);
textureIsReady = true;
}
/*http://www.jayway.com/2010/12/30/opengl-es-tutorial-for-android-part-vi-textures/*/
public void loadTexture(GL10 gl){
gl.glGenTextures(1, textures, 0);
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
gl.glTexParameterx(GL10.GL_TEXTURE_2D,
GL10.GL_TEXTURE_MAG_FILTER,
GL10.GL_LINEAR);
gl.glTexParameterx(GL10.GL_TEXTURE_2D,
GL10.GL_TEXTURE_MIN_FILTER,
GL10.GL_LINEAR);
gl.glTexParameterx(GL10.GL_TEXTURE_2D,
GL10.GL_TEXTURE_WRAP_S,
GL10.GL_CLAMP_TO_EDGE);
gl.glTexParameterx(GL10.GL_TEXTURE_2D,
GL10.GL_TEXTURE_WRAP_T,
GL10.GL_CLAMP_TO_EDGE);
/*bind bitmap to texture*/
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
}
And the drawing happens based on this code:
public void draw(GL10 gl){
if(textured && textureIsReady){
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
//loadTexture(gl);
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0,
vertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0,
textureBuffer);
}else{
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glColor4f(color[0], color[1], color[2], color[3]);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0,
vertexBuffer);
}
if(!indexed)gl.glDrawArrays(drawMode, 0, coordCount);
else gl.glDrawElements(drawMode, orderCount, GL10.GL_UNSIGNED_SHORT, orderBuffer);
if(textured && textureIsReady){
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glDisable(GL10.GL_TEXTURE_2D);
}else{
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
}
}
The initialization is as follows:
pic = new VertexObject();
pic.indexed = true;
pic.textured = true;
pic.initTexture(gl,MainActivity.bp);
pic.color[0] = 0.0f;
pic.color[1] = 0.0f;
pic.color[2] = 0.0f;
float inputVertex[] = {2.0f,2.0f,0.0f};
float inputTexture[] = {0.0f,0.0f};
pic.addTexturedVertex(inputVertex,inputTexture);
inputVertex[0] = 2.0f;
inputVertex[1] = 8.0f;
inputTexture[0] = 0.0f;
inputTexture[0] = 1.0f;
pic.addTexturedVertex(inputVertex,inputTexture);
inputVertex[0] = 8.0f;
inputVertex[1] = 8.0f;
inputTexture[0] = 1.0f;
inputTexture[0] = 1.0f;
pic.addTexturedVertex(inputVertex,inputTexture);
inputVertex[0] = 8.0f;
inputVertex[1] = 2.0f;
inputTexture[0] = 1.0f;
inputTexture[0] = 0.0f;
pic.addTexturedVertex(inputVertex,inputTexture);
pic.addIndex((short)0);
pic.addIndex((short)1);
pic.addIndex((short)2);
pic.addIndex((short)0);
pic.addIndex((short)2);
pic.addIndex((short)3);
The coordinates are simply added to the ArrayList, and then I refresh the buffers.
The bitmap is valid, because it shows up in an ImageView.
The image is a 128x128 png file in the drawable folder.
From what I gathered, the image is getting to the VertexObject, but something isn't right with the texture mapping. Any pointers on what I am doing wrong?
Okay, I got it!
I downloaded a working example from the internet and rewrote it step by step to resemble the object presented above, checking at every step whether it still worked. It turns out the problem wasn't in the graphical part at all, because the object worked in another context with different coordinates.
Long story short:
I got the texture UV mapping wrong!
That's why I got the solid color: the texture was loaded, but the UV mapping wasn't correct.
Short story long:
At the lines
inputVertex[0] = 2.0f;
inputVertex[1] = 8.0f;
inputTexture[0] = 0.0f;
inputTexture[0] = 1.0f;
the indexing was wrong: only the first element of inputTexture was ever updated. There may have been additional errors in the sizes of the various arrays describing the vertex coordinates, but rewriting along the lines of the linked example fixed the problem and produced more concise code. The corrected assignments are shown below.
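This is taken straight from the initialization code above, with the second index fixed:
inputVertex[0] = 2.0f;
inputVertex[1] = 8.0f;
inputTexture[0] = 0.0f;
inputTexture[1] = 1.0f; // previously inputTexture[0] was assigned twice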
