Memory violation in Android OpenGL GLES20 with glTexImage2D

I have a bitmap (which can be converted to a ByteBuffer). I want to upload all six of its faces, by offsets, to the GPU in OpenGL. When I do the following, the app crashes with OpenGL reporting a memory violation.
Here bitmap is a byte array (byte[]):
for (int i = 0; i < 6; i++) {
    GLES20.glTexImage2D(
            GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X + i,
            0,
            GLES20.GL_RGBA,
            side,
            side,
            0,
            GLES20.GL_RGBA,
            GLES20.GL_UNSIGNED_BYTE,
            ByteBuffer.wrap(bitmap, length / 6 * i, side * side * 4));
}
But when I copy the array and then upload to the GPU like this (here bitmap is of type Bitmap):
int numBytes = bitmap.getByteCount();
ByteBuffer pixels = ByteBuffer.allocate(numBytes);
bitmap.copyPixelsToBuffer(pixels);
for (int i = 0; i < 6; i++) {
    Log.d("aakash", String.valueOf(numBytes / 6 * i));
    byte[] arr = Arrays.copyOfRange(pixels.array(), numBytes / 6 * i, numBytes / 6 * (i + 1));
    GLES20.glTexImage2D(
            GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X + i,
            0,
            GLES20.GL_RGBA,
            bitmap.getWidth(),
            bitmap.getHeight() / 6,
            0,
            GLES20.GL_RGBA,
            GLES20.GL_UNSIGNED_BYTE,
            ByteBuffer.wrap(arr));
}
I get the cubemap correctly rendered.
What am I doing wrong in the first one? I want to avoid copying the array just to upload parts of it to the GPU.
I can assure you that the sizes and offset calculations are correct.

To avoid the memory violation, just replace
ByteBuffer pixels = ByteBuffer.allocate(numBytes);
with
ByteBuffer pixels = ByteBuffer.allocateDirect(numBytes);
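The Android GLES bindings are only reliable with direct buffers: with a heap buffer from allocate() or wrap(), the native side can end up reading from the wrong address, which is presumably the memory violation you hit. If you still want the no-copy upload from your first snippet, here is a minimal sketch under that assumption (bitmap as the raw byte[], side and length as in your code); it stages the pixels once in a direct buffer and repositions it per face, since glTexImage2D reads from the buffer's position():

// Sketch: stage the pixel data once in a direct, native-order buffer.
ByteBuffer pixels = ByteBuffer.allocateDirect(length).order(ByteOrder.nativeOrder());
pixels.put(bitmap);

int faceBytes = side * side * 4; // RGBA, 4 bytes per pixel
for (int i = 0; i < 6; i++) {
    pixels.position(faceBytes * i); // reposition instead of copying
    GLES20.glTexImage2D(
            GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X + i,
            0, GLES20.GL_RGBA, side, side, 0,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
}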
But you don't need a ByteBuffer at all for simple per-side texture loading:
loadSideTexture(context, GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X, R.raw.lake2_rt);
loadSideTexture(context, GLES20.GL_TEXTURE_CUBE_MAP_NEGATIVE_X, R.raw.lake2_lf);
loadSideTexture(context, GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_Y, R.raw.lake2_up);
loadSideTexture(context, GLES20.GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, R.raw.lake2_dn);
loadSideTexture(context, GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_Z, R.raw.lake2_bk);
loadSideTexture(context, GLES20.GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, R.raw.lake2_ft);
private void loadSideTexture(Context context, int target, @RawRes int resID) {
    final Bitmap bitmap = BitmapFactory.decodeStream(context.getResources().openRawResource(resID));
    GLUtils.texImage2D(target, 0, bitmap, 0);
    bitmap.recycle();
}
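(GLUtils.texImage2D infers the texture format and type from the Bitmap's config, so there is no format or size bookkeeping to get wrong.)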

Related

Convert OpenGL RGBA byte array to Android ARGB byte array

I can use the following code to convert an OpenGL RGBA int array to an Android ARGB int array.
int b[] = new int[(int) (w * h)];
int bt[] = new int[(int) (w * h)];
IntBuffer buffer = IntBuffer.wrap(b);
buffer.position(0);
GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buffer);
for (int i = 0; i < h; i++) {
    // Remember that the OpenGL bitmap layout is incompatible with the Android
    // bitmap layout, so some correction is needed.
    for (int j = 0; j < w; j++) {
        int pix = b[i * w + j];
        int pb = (pix >> 16) & 0xff;
        int pr = (pix << 16) & 0x00ff0000;
        int pix1 = (pix & 0xff00ff00) | pr | pb;
        bt[(h - i - 1) * w + j] = pix1;
    }
}
Though I could convert the int array to a byte array to achieve my goal, the efficiency is a little low, so I want to use the following code to achieve the same thing:
final ByteBuffer rgbaData = ByteBuffer.allocateDirect(WIDTH * HEIGHT * 4);
GLES20.glReadPixels(0, 0, WIDTH, HEIGHT, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, rgbaData);
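One way to skip the per-pixel swizzle entirely: ARGB_8888 bitmaps store their pixels as RGBA bytes in memory, so copyPixelsFromBuffer() accepts glReadPixels output directly. A minimal sketch, assuming the WIDTH and HEIGHT from the snippet above (the result is still upside down relative to the screen, so it still needs a vertical flip, e.g. the row loop from the first snippet or an android.graphics.Matrix):

final ByteBuffer rgbaData = ByteBuffer.allocateDirect(WIDTH * HEIGHT * 4)
        .order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, WIDTH, HEIGHT, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, rgbaData);
rgbaData.rewind();
// ARGB_8888's in-memory byte order matches GL_RGBA/GL_UNSIGNED_BYTE,
// so no channel swapping is required here.
Bitmap bmp = Bitmap.createBitmap(WIDTH, HEIGHT, Bitmap.Config.ARGB_8888);
bmp.copyPixelsFromBuffer(rgbaData);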

Load huge texture in parts in OpenGL ES 2.0

I'd like to load a huge bitmap as a texture into the graphics card's memory on Android, in OpenGL ES 2.0, to be used as a texture atlas, in the biggest size possible. My device has a maximum texture size of 8192x8192.
I know that I can load a bitmap as a texture the following way:
// create bitmap
Bitmap bitmap = Bitmap.createBitmap(8192, 8192, Bitmap.Config.ARGB_8888);
{ // draw something
    Canvas c = new Canvas(bitmap);
    Paint p = new Paint(Paint.ANTI_ALIAS_FLAG);
    p.setColor(0xFFFF0000);
    c.drawCircle(4096, 4096, 4096, p);
}
// load as a texture
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
However, (not surprisingly) I get a java.lang.OutOfMemoryError when trying to create a bitmap of this size.
Is it possible to load it in parts? As it's a texture atlas, it could be assembled from smaller bitmaps. I looked at the texSubImage2D function, but I don't understand where you would initialize the full-sized texture, or provide the size of the full texture beforehand.
On the GL side you need to allocate the full storage, and then patch it.
Allocate storage using glTexImage2D() with a null value for the data parameter. Upload patches using glTexSubImage2D().
Note that this still requires 256MB of memory, so on many budget devices you'll still get an OOM ...
Based on solidpixel's answer, this is code that does the job:
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 8192, 8192, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
Bitmap bitmap = Bitmap.createBitmap(1024, 1024, Bitmap.Config.ARGB_8888);
Canvas c = new Canvas(bitmap);
Paint p = new Paint(Paint.ANTI_ALIAS_FLAG);
for (int i = 0; i < 8; ++i) {
    for (int j = 0; j < 8; ++j) {
        // clear the bitmap
        c.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR);
        { // draw something
            p.setARGB(255, 32 * i, 32 * j, 255 - 32 * j);
            c.drawCircle(512, 512, 512, p);
        }
        // load as part of the texture
        GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, i * 1024, j * 1024, bitmap);
    }
}
Here the texture is assembled from 64 bitmaps of 1024x1024 pixels each.
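If you can target OpenGL ES 3.0, immutable storage expresses the same allocate-once-patch-later idea more directly; a one-line sketch, not part of the original answer:

// ES 3.0: allocate immutable storage for one mip level, then fill it with texSubImage2D.
GLES30.glTexStorage2D(GLES30.GL_TEXTURE_2D, 1, GLES30.GL_RGBA8, 8192, 8192);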

Extract pixels from TextureSurface using glReadPixels resulting in bad image Bitmap

I am trying to send bitmap images every few seconds from the SurfaceTexture view in Android. I read the pixels using glReadPixels(), and the image I get comes out badly corrupted (screenshot omitted).
My code looks something like this:
int size = this.width * this.height;
ByteBuffer bb = ByteBuffer.allocateDirect(size * 4);
bb.order(ByteOrder.nativeOrder());
gl.glReadPixels(0, 0, width, height, GL10.GL_RGB, GL10.GL_UNSIGNED_BYTE, bb);
int pixelsBuffer[] = new int[size];
bb.asIntBuffer().get(pixelsBuffer);
bb = null;
for (int i = 0; i < size; i++) {
    pixelsBuffer[i] = ((pixelsBuffer[i] & 0xff00ff00))
            | ((pixelsBuffer[i] & 0x000000ff) << 16)
            | ((pixelsBuffer[i] & 0x00ff0000) >> 16);
}
Bitmap bm = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
bm.setPixels(pixelsBuffer, size - width, -width, 0, 0, width, height);
if (now - init > 5000) {
    init = now;
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    bm.compress(Bitmap.CompressFormat.JPEG, 100, baos);
    byte[] b = baos.toByteArray();
    String encodedImage = Base64.encodeToString(b, Base64.DEFAULT);
}
Note: now and init are just longs set from System.currentTimeMillis().
Does anyone know what's wrong? Or is there a better way to convert the image to a base64 String? I need to send it to a server.
The one thing I can be confident about:
gl.glReadPixels(0, 0, width, height, GL10.GL_RGB, GL10.GL_UNSIGNED_BYTE, bb);
Probably wants to be:
gl.glReadPixels(0, 0, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, bb);
... otherwise each pixel is being returned as three bytes, and then probably being spaced in a manner you don't want unless you've also adjusted GL_PACK_ALIGNMENT. Your code otherwise seems to assume four bytes.
(EDIT: this is one of the "Common Mistakes: Texture upload and pixel reads" per the OpenGL.org wiki)
I'm such a Java/Android dunce that I'm taking it as given you're confident creating a bitmap with the 16-bit Bitmap.Config.RGB_565 format and then posting 32-bit data via setPixels. It seems a bit weird, though; especially given the depth of JPEG, wouldn't Bitmap.Config.ARGB_8888 be more appropriate?
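Putting both points together, a hedged sketch of the corrected readback, using the static GLES20 bindings rather than the gl interface (GL_PACK_ALIGNMENT is set defensively; tightly packed RGBA rows are 4-byte aligned anyway):

GLES20.glPixelStorei(GLES20.GL_PACK_ALIGNMENT, 1); // defensive; RGBA rows are already aligned
ByteBuffer bb = ByteBuffer.allocateDirect(width * height * 4)
        .order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bb);
bb.rewind();
// RGBA bytes match ARGB_8888's in-memory layout, so copy directly.
Bitmap bm = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bm.copyPixelsFromBuffer(bb); // still vertically flipped relative to the screen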

Orientation issue when saving the EGL surface to a file in the Android grafika open-source code

I use the saveFrame method in grafika, but I found there is an orientation issue: the frame saved by saveFrame is flipped compared to what I see on screen (comparison screenshots omitted).
The code is as follows:
/**
 * Saves the EGL surface to a file.
 * <p>
 * Expects that this object's EGL surface is current.
 */
public void saveFrame(File file) throws IOException {
    if (!mEglCore.isCurrent(mEGLSurface)) {
        throw new RuntimeException("Expected EGL context/surface is not current");
    }

    // glReadPixels fills in a "direct" ByteBuffer with what is essentially big-endian RGBA
    // data (i.e. a byte of red, followed by a byte of green...). While the Bitmap
    // constructor that takes an int[] wants little-endian ARGB (blue/red swapped), the
    // Bitmap "copy pixels" method wants the same format GL provides.
    //
    // Ideally we'd have some way to re-use the ByteBuffer, especially if we're calling
    // here often.
    //
    // Making this even more interesting is the upside-down nature of GL, which means
    // our output will look upside down relative to what appears on screen if the
    // typical GL conventions are used.

    String filename = file.toString();
    int width = getWidth();
    int height = getHeight();
    ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
    buf.order(ByteOrder.LITTLE_ENDIAN);
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
    GlUtil.checkGlError("glReadPixels");
    buf.rewind();

    BufferedOutputStream bos = null;
    try {
        bos = new BufferedOutputStream(new FileOutputStream(filename));
        Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        bmp.copyPixelsFromBuffer(buf);
        bmp.compress(Bitmap.CompressFormat.PNG, 90, bos);
        bmp.recycle();
    } finally {
        if (bos != null) bos.close();
    }
    Log.d(TAG, "Saved " + width + "x" + height + " frame as '" + filename + "'");
}
So how do I deal with the orientation issue?
Use the following code to solve the problem:
IntBuffer ib = IntBuffer.allocate(width * height);
IntBuffer ibt = IntBuffer.allocate(width * height);
gl.glReadPixels(0, 0, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, ib);

// Convert the upside-down, mirror-reversed image to a right-side-up normal image.
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        ibt.put((height - i - 1) * width + j, ib.get(i * width + j));
    }
}

Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(ibt);
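Alternatively, keep saveFrame as it is and flip the finished Bitmap (bmp in saveFrame) with an android.graphics.Matrix, which avoids the hand-written pixel loop; a short sketch:

// Flip vertically with the Bitmap API instead of copying pixels one by one.
Matrix m = new Matrix();
m.preScale(1f, -1f);
Bitmap flipped = Bitmap.createBitmap(bmp, 0, 0, width, height, m, false);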

Android 4.3 PBO not working

I am using a PBO to take a screenshot. However, the resulting image is all black. It works perfectly fine without the PBO. Is there anything I need to take care of before doing this?
I even tried rendering to an FBO and then using GLES30.glReadBuffer(GLES30.GL_COLOR_ATTACHMENT0), with no luck.
public void SetupPBO() {
    GLES30.glGenBuffers(1, pbuffers, 0);
    GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbuffers[0]);
    int size = (int) this.mScreenHeight * (int) this.mScreenWidth * 4;
    GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, size, null, GLES30.GL_DYNAMIC_READ);
    checkGlError("glBufferData");
    GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
}
private void Render(float[] m) {
    // ...normal render logic...
    exportBitmap();
}

private void exportBitmap() {
    int screenshotSize = (int) this.mScreenWidth * (int) this.mScreenHeight;
    ByteBuffer bb = ByteBuffer.allocateDirect(screenshotSize * 4);
    bb.order(ByteOrder.nativeOrder());
    // set the target framebuffer to read
    GLES30.glReadBuffer(GLES30.GL_FRONT);
    checkGlError("glReadBuffer");
    GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbuffers[0]);
    GLES30.glReadPixels(0, 0, (int) mScreenWidth, (int) mScreenHeight,
            GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, bb); // <------ not working?????
    int pixelsBuffer[] = new int[screenshotSize];
    bb.asIntBuffer().get(pixelsBuffer);
    bb = null;
    for (int i = 0; i < screenshotSize; ++i) {
        // The alpha and green channels' positions are preserved while the
        // red and blue are swapped.
        pixelsBuffer[i] = ((pixelsBuffer[i] & 0xff00ff00))
                | ((pixelsBuffer[i] & 0x000000ff) << 16)
                | ((pixelsBuffer[i] & 0x00ff0000) >> 16);
    }
    Bitmap bitmap = Bitmap.createBitmap((int) mScreenWidth, (int) mScreenHeight, Bitmap.Config.ARGB_8888);
    bitmap.setPixels(pixelsBuffer, screenshotSize - (int) mScreenWidth, -(int) mScreenWidth,
            0, 0, (int) mScreenWidth, (int) mScreenHeight);
    SaveBitmap(bitmap);
}
GLES30.glReadPixels(0, 0, (int) mScreenWidth, (int) mScreenHeight, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, bb);
While a buffer is bound to GL_PIXEL_PACK_BUFFER, the last argument of glReadPixels is interpreted as an offset into your PBO, not as a client-side buffer. Passing bb therefore makes GL write far out of bounds (on some drivers this code causes a crash). You should pass 0 instead of bb. To retrieve the data from the PBO afterwards, use glMapBufferRange().
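A sketch of the intended PBO readback path, assuming API 24+ (where the int-offset overload of GLES30.glReadPixels is available) and the SetupPBO() above:

GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbuffers[0]);
// With a PBO bound, pass an offset (0), not a client-side Buffer.
GLES30.glReadPixels(0, 0, (int) mScreenWidth, (int) mScreenHeight,
        GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);
// Map the PBO to read the pixels back (ES 3.0 uses glMapBufferRange).
ByteBuffer mapped = (ByteBuffer) GLES30.glMapBufferRange(
        GLES30.GL_PIXEL_PACK_BUFFER, 0,
        (int) mScreenWidth * (int) mScreenHeight * 4,
        GLES30.GL_MAP_READ_BIT);
// ... copy what you need out of `mapped` ...
GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);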
