Extracting pixels from a SurfaceTexture using glReadPixels results in a bad Bitmap image - android

I am trying to send bitmap images every few seconds from the SurfaceTexture view in Android. I read the pixels using glReadPixels() and the image I get comes out garbled (example image omitted).
My code looks something like this:
int size = this.width * this.height;
ByteBuffer bb = ByteBuffer.allocateDirect(size * 4);
bb.order(ByteOrder.nativeOrder());
gl.glReadPixels(0, 0, width, height, GL10.GL_RGB, GL10.GL_UNSIGNED_BYTE, bb);
int pixelsBuffer[] = new int[size];
bb.asIntBuffer().get(pixelsBuffer);
bb = null;
for (int i = 0; i < size; i++) {
    pixelsBuffer[i] = ((pixelsBuffer[i] & 0xff00ff00)) | ((pixelsBuffer[i] & 0x000000ff) << 16) | ((pixelsBuffer[i] & 0x00ff0000) >> 16);
}
Bitmap bm = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
bm.setPixels(pixelsBuffer, size - width, -width, 0, 0, width, height);
if (now - init > 5000) {
    init = now;
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    bm.compress(Bitmap.CompressFormat.JPEG, 100, baos);
    byte[] b = baos.toByteArray();
    String encodedImage = Base64.encodeToString(b, Base64.DEFAULT);
}
Note: now and init are just longs taken from System.currentTimeMillis().
Does anyone know what's wrong? Or is there a better way to convert the image to a Base64 string? I need to send it to a server.

The one thing I can say with confidence:
gl.glReadPixels(0, 0, width, height, GL10.GL_RGB, GL10.GL_UNSIGNED_BYTE, bb);
Probably wants to be:
gl.glReadPixels(0, 0, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, bb);
... otherwise each pixel is returned as three bytes, which are then probably packed in a way you don't want unless you've also adjusted GL_PACK_ALIGNMENT. The rest of your code assumes four bytes per pixel.
(EDIT: this is one of the "Common Mistakes: Texture upload and pixel reads" per the OpenGL.org wiki)
I'm such a Java/Android dunce that I'm taking it as given that you're confident in creating a bitmap with a 16-bit Bitmap.Config.RGB_565 format but then posting 32-bit data via setPixels. It seems a bit odd, though: given that JPEG keeps 8 bits per channel, wouldn't Bitmap.Config.ARGB_8888 be more appropriate?
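Putting both suggestions together, a minimal sketch of the capture path might look like the following (untested; width, height, gl and the timing fields are the question's own variables):

// Read RGBA so each pixel is four bytes, matching the buffer math below.
int size = width * height;
ByteBuffer bb = ByteBuffer.allocateDirect(size * 4);
bb.order(ByteOrder.nativeOrder());
gl.glReadPixels(0, 0, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, bb);

int[] pixels = new int[size];
bb.asIntBuffer().get(pixels);
// On little-endian hardware the RGBA bytes read back as ABGR ints;
// Android's int pixels are ARGB, so swap the red and blue channels.
for (int i = 0; i < size; i++) {
    int p = pixels[i];
    pixels[i] = (p & 0xff00ff00) | ((p & 0x000000ff) << 16) | ((p & 0x00ff0000) >> 16);
}

// Use a 32-bit config to match the 32-bit pixels; the negative stride
// flips the image vertically, since GL rows run bottom-up.
Bitmap bm = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bm.setPixels(pixels, size - width, -width, 0, 0, width, height);

ByteArrayOutputStream baos = new ByteArrayOutputStream();
bm.compress(Bitmap.CompressFormat.JPEG, 100, baos);
String encodedImage = Base64.encodeToString(baos.toByteArray(), Base64.DEFAULT);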

Related

GLES20.glReadPixel too slow

This is an old question, but I'd still like an answer with code.
The following is too slow for real time; I intend to use it later with OpenTOK screen sharing. Is there a faster alternative?
ByteBuffer bb = ByteBuffer.allocateDirect(screenshotSize * 4);
bb.order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height, GL_RGBA, GL10.GL_UNSIGNED_BYTE, bb);
int pixelsBuffer[] = new int[screenshotSize];
bb.asIntBuffer().get(pixelsBuffer);
final Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
bitmap.setPixels(pixelsBuffer, screenshotSize - width, -width, 0, 0, width, height);
pixelsBuffer = null;
short sBuffer[] = new short[screenshotSize];
ShortBuffer sb = ShortBuffer.wrap(sBuffer);
bitmap.copyPixelsToBuffer(sb);
for (int i = 0; i < screenshotSize; ++i) {
    short v = sBuffer[i];
    sBuffer[i] = (short) (((v & 0x1f) << 11) | (v & 0x7e0) | ((v & 0xf800) >> 11));
}
sb.rewind();
bitmap.copyPixelsFromBuffer(sb);
PS: I already tried GL_RGB and GL_BGRA, but it is still slow and I only get a black screen.
First off, glReadPixels isn't what is causing your code to slow down; all the buffer allocation and format conversion is.
Reuse buffers. Allocate them once and reuse them.
ByteBuffer bb = ByteBuffer.allocateDirect(screenshotSize * 4);
bb.order(ByteOrder.nativeOrder());
Not a problem with this bit.
GLES20.glReadPixels(0, 0, width, height, GL_RGBA, GL10.GL_UNSIGNED_BYTE, bb);
Now you're preparing to convert to another format, which has a lot of overhead. Stick with the format you receive. And again, you're allocating buffers instead of reusing them.
However, the Bitmap can be allocated once and reused.
int pixelsBuffer[] = new int[screenshotSize];
bb.asIntBuffer().get(pixelsBuffer);
final Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
bitmap.setPixels(pixelsBuffer, screenshotSize - width, -width, 0, 0, width, height);
pixelsBuffer = null;
short sBuffer[] = new short[screenshotSize];
ShortBuffer sb = ShortBuffer.wrap(sBuffer);
bitmap.copyPixelsToBuffer(sb);
You then won't need this unnecessary conversion.
for (int i = 0; i < screenshotSize; ++i) {
    short v = sBuffer[i];
    sBuffer[i] = (short) (((v & 0x1f) << 11) | (v & 0x7e0) | ((v & 0xf800) >> 11));
}
copyPixelsFromBuffer() can then be used, as long as your Bitmap is in the same format:
bitmap.copyPixelsFromBuffer(bb);
Generally Android Bitmaps (Bitmap.Config.ARGB_8888) use the same byte layout as GL_RGBA, so it's unlikely you will need to convert anything. The above reduces everything down to just a read and a copy.
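A minimal sketch of that advice, assuming an ARGB_8888 Bitmap the same size as the GL surface (the field names here are illustrative; allocate once, then reuse every frame):

// Allocated once (e.g. in onSurfaceChanged), reused every frame.
private ByteBuffer pixelBuf;  // direct buffer for glReadPixels
private Bitmap frameBitmap;   // ARGB_8888 matches the GL_RGBA byte layout

private void initCapture(int width, int height) {
    pixelBuf = ByteBuffer.allocateDirect(width * height * 4);
    pixelBuf.order(ByteOrder.nativeOrder());
    frameBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
}

private Bitmap captureFrame(int width, int height) {
    pixelBuf.rewind();
    GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuf);
    pixelBuf.rewind();
    frameBitmap.copyPixelsFromBuffer(pixelBuf); // no per-pixel conversion
    return frameBitmap; // still upside-down; flip with a Matrix only if you must
}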

Memory violation in Android OpenGL GLES20 with glTexImage2D

I have a bitmap (which can be converted to a ByteBuffer). I want to upload all six of its faces by offsets to the GPU in OpenGL. When I do the following, the app crashes with OpenGL reporting a memory violation.
Here bitmap is a byte array (byte[]):
for (int i = 0; i < 6; i++) {
    GLES20.glTexImage2D(
            GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X + i,
            0,
            GLES20.GL_RGBA,
            side,
            side,
            0,
            GLES20.GL_RGBA,
            GLES20.GL_UNSIGNED_BYTE,
            ByteBuffer.wrap(bitmap, length / 6 * i, side * side * 4));
}
But when I copy the array and then upload to the GPU like this (here bitmap is of type Bitmap):
int numBytes = bitmap.getByteCount();
ByteBuffer pixels = ByteBuffer.allocate(numBytes);
bitmap.copyPixelsToBuffer(pixels);
for (int i = 0; i < 6; i++) {
    Log.d("aakash", String.valueOf(numBytes / 6 * i));
    byte[] arr = Arrays.copyOfRange(pixels.array(), numBytes / 6 * i, numBytes / 6 * (i + 1));
    GLES20.glTexImage2D(
            GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X + i,
            0,
            GLES20.GL_RGBA,
            bitmap.getWidth(),
            bitmap.getHeight() / 6,
            0,
            GLES20.GL_RGBA,
            GLES20.GL_UNSIGNED_BYTE,
            ByteBuffer.wrap(arr));
}
I get the cubemap correctly rendered.
What am I doing wrong in the first one? I want to avoid copying the array to upload parts of it to the GPU.
I can assure that the size and the mathematical calculations are correct.
To avoid the memory violation, just replace
ByteBuffer pixels = ByteBuffer.allocate(numBytes);
with
ByteBuffer pixels = ByteBuffer.allocateDirect(numBytes);
But you don't need a ByteBuffer for simple side-texture loading:
loadSideTexture(context, GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X, R.raw.lake2_rt);
loadSideTexture(context, GLES20.GL_TEXTURE_CUBE_MAP_NEGATIVE_X, R.raw.lake2_lf);
loadSideTexture(context, GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_Y, R.raw.lake2_up);
loadSideTexture(context, GLES20.GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, R.raw.lake2_dn);
loadSideTexture(context, GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_Z, R.raw.lake2_bk);
loadSideTexture(context, GLES20.GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, R.raw.lake2_ft);
private void loadSideTexture(Context context, int target, @RawRes int resID) {
    final Bitmap bitmap = BitmapFactory.decodeStream(context.getResources().openRawResource(resID));
    GLUtils.texImage2D(target, 0, bitmap, 0);
    bitmap.recycle();
}
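If you want to keep a single buffer and still avoid the per-face copy, here is a sketch of one approach (assuming, as in the question's offset math, that the six faces sit back to back in the buffer; it relies on the Android GLES bindings uploading from a direct buffer's current position):

// Copy the source bitmap once into a single direct buffer.
int numBytes = bitmap.getByteCount();
ByteBuffer pixels = ByteBuffer.allocateDirect(numBytes);
bitmap.copyPixelsToBuffer(pixels);

int faceBytes = numBytes / 6; // side * side * 4 bytes per face
for (int i = 0; i < 6; i++) {
    pixels.position(faceBytes * i); // upload starts at the buffer's position
    GLES20.glTexImage2D(
            GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X + i,
            0, GLES20.GL_RGBA,
            side, side, 0,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE,
            pixels);
}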

Android 4.3 PBO not working

I am using a PBO to take a screenshot. However, the resulting image is all black. It works perfectly fine without the PBO. Is there anything I need to take care of before doing this?
I even tried rendering to an FBO and then using GLES30.glReadBuffer(GLES30.GL_COLOR_ATTACHMENT0), with no luck.
public void SetupPBO() {
    GLES30.glGenBuffers(1, pbuffers, 0);
    GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbuffers[0]);
    int size = (int) this.mScreenHeight * (int) this.mScreenWidth * 4;
    GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, size, null, GLES30.GL_DYNAMIC_READ);
    checkGlError("glReadBuffer");
    GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
}
private void Render(float[] m) {
    ....... // Normal render logic
    exportBitmap();
}
private void exportBitmap() {
    int screenshotSize = (int) this.mScreenWidth * (int) this.mScreenHeight;
    ByteBuffer bb = ByteBuffer.allocateDirect(screenshotSize * 4);
    bb.order(ByteOrder.nativeOrder());
    // set the target framebuffer to read
    GLES30.glReadBuffer(GLES30.GL_FRONT);
    checkGlError("glReadBuffer");
    GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbuffers[0]);
    GLES30.glReadPixels(0, 0, (int) mScreenWidth, (int) mScreenHeight, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, bb); // <------ not working ?????
    int pixelsBuffer[] = new int[screenshotSize];
    bb.asIntBuffer().get(pixelsBuffer);
    bb = null;
    for (int i = 0; i < screenshotSize; ++i) {
        // The alpha and green channels' positions are preserved while the
        // red and blue are swapped
        pixelsBuffer[i] = ((pixelsBuffer[i] & 0xff00ff00))
                | ((pixelsBuffer[i] & 0x000000ff) << 16)
                | ((pixelsBuffer[i] & 0x00ff0000) >> 16);
    }
    Bitmap bitmap = Bitmap.createBitmap((int) mScreenWidth, (int) mScreenHeight, Bitmap.Config.ARGB_8888);
    bitmap.setPixels(pixelsBuffer, screenshotSize - (int) mScreenWidth, -(int) mScreenWidth, 0, 0, (int) mScreenWidth, (int) mScreenHeight);
    SaveBitmap(bitmap);
}
GLES30.glReadPixels(0, 0, (int)mScreenWidth, (int)mScreenHeight, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, bb);
With a PIXEL_PACK buffer bound, the last argument is interpreted as an offset into your PBO, not as a client-side buffer. Thus you're writing out of bounds (on some drivers this code causes a crash). You should pass 0 instead of bb, and then retrieve the data from the PBO with GLES30.glMapBufferRange().
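A sketch of the corrected read-back path (assuming the PBO created in SetupPBO(), and API level 24+, where the GLES30.glReadPixels overload taking an int offset is available):

// With a PIXEL_PACK buffer bound, glReadPixels writes into the PBO;
// the last argument is a byte offset into it, not a client-side buffer.
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbuffers[0]);
GLES30.glReadPixels(0, 0, width, height, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);

// Map the PBO to reach the pixels (ideally a frame later, so the read stays asynchronous).
int size = width * height * 4;
ByteBuffer pixels = (ByteBuffer) GLES30.glMapBufferRange(
        GLES30.GL_PIXEL_PACK_BUFFER, 0, size, GLES30.GL_MAP_READ_BIT);
if (pixels != null) {
    pixels.order(ByteOrder.nativeOrder());
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(pixels); // RGBA bytes match ARGB_8888's layout
    GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
    // bitmap is vertically flipped, as with any glReadPixels result
}
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);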

How do I load 8-bit binary image data into an OpenGL ES 2 texture on Android

I want to read monochrome image data from disk in a binary format (unsigned byte) and display it as an OpenGL ES 2 texture in Android. I am currently using Eclipse and the AVD emulator.
I am able to read the data from disk using an InputStream, and then convert the byte data to int to allow me to use the createBitmap method.
My hope was to create a monochrome bitmap by using ALPHA_8 as the bitmap format, but if I do that the texture appears as solid black when rendered. If I change the bitmap format to RGB_565 I can see parts of the image but of course the color is all scrambled because it is the wrong data format.
I have tried adding extra parameters to texImage2D() to try to force the texture format and source data type, but Eclipse shows an error if I use any of the opengl texture format codes in the texImage2D arguments.
I'm at a loss, can anyone tell me how to edit this to get a monochrome texture into OpenGL ES?
int w = 640;
int h = 512;
int nP = w * h; // no. of pixels

// load the binary data
byte[] byteArray = new byte[nP];
try {
    InputStream fis = mContext.getResources()
            .openRawResource(R.raw.testimage); // testimage is a binary file of U8 image data
    fis.read(byteArray);
    fis.close();
} catch (IOException e) {
    // Ignore.
}
System.out.println(byteArray[1]);

// convert byte to int to work with createBitmap (is there a better way to do this?)
int[] intArray = new int[nP];
for (int i = 0; i < nP; i++) {
    intArray[i] = byteArray[i];
}

// create bitmap from intArray and send to texture
Bitmap img = Bitmap.createBitmap(intArray, w, h, Bitmap.Config.ALPHA_8);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, img, 0);
img.recycle();
// as the code stands the image is black; if I change ALPHA_8 to RGB_565 then I see a corrupted image
Once you have loaded the image into a byte array, you can also pass it to glTexImage2D directly. It would be something along these lines:
byte[] data = yourByteData; // bitmapWidth * bitmapHeight bytes, one per pixel
ByteBuffer buffer = ByteBuffer.allocateDirect(data.length);
buffer.put(data);
buffer.position(0);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
        bitmapWidth, bitmapHeight, 0, GLES20.GL_LUMINANCE,
        GLES20.GL_UNSIGNED_BYTE, buffer);
This should replicate each byte into the R, G and B channels (the same value for each), with alpha set to one.
According to the createBitmap docs, that int array is interpreted as an array of Color, which is "(alpha << 24) | (red << 16) | (green << 8) | blue". So, when you're loading those bytes and populating your int array, you're currently putting the data in the blue slot instead of the alpha slot. As such, your alpha values are all zero, which I'd actually expect to result in a clear texture. I believe you want
intArray[i] = byteArray[i] << 24;
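One more caveat, hedged since it depends on your shader: GLUtils.texImage2D uploads an ALPHA_8 bitmap as a GL_ALPHA texture, whose RGB channels sample as zero, so a fragment shader that reads .rgb will still show solid black. Reading the .a component instead would look something like this (a hypothetical shader, not from the question):

// Fragment shader for a GL_ALPHA texture: the value lives in .a
String fragmentShader =
        "precision mediump float;\n" +
        "uniform sampler2D uTexture;\n" +
        "varying vec2 vTexCoord;\n" +
        "void main() {\n" +
        "    float v = texture2D(uTexture, vTexCoord).a; // .rgb samples as 0\n" +
        "    gl_FragColor = vec4(v, v, v, 1.0);\n" +
        "}\n";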

Taking screenshot of Android OpenGL

I'm trying to take a screenshot of Android OpenGL.
The code I found is as follows:
int size = width * height;
ByteBuffer buf = ByteBuffer.allocateDirect(size * 4);
buf.order(ByteOrder.nativeOrder());
glContext.glReadPixels(0, 0, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, buf);
int data[] = new int[size];
buf.asIntBuffer().get(data);
buf = null;
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
bitmap.setPixels(data, size-width, -width, 0, 0, width, height);
data = null;
short sdata[] = new short[size];
ShortBuffer sbuf = ShortBuffer.wrap(sdata);
bitmap.copyPixelsToBuffer(sbuf);
for (int i = 0; i < size; ++i) {
    // BGR-565 to RGB-565
    short v = sdata[i];
    sdata[i] = (short) (((v & 0x1f) << 11) | (v & 0x7e0) | ((v & 0xf800) >> 11));
}
sbuf.rewind();
bitmap.copyPixelsFromBuffer(sbuf);
try {
    FileOutputStream fos = new FileOutputStream("/sdcard/screeshot.png");
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos);
    fos.flush();
    fos.close();
} catch (Exception e) {
    // handle
}
I also tried code from another site (link text).
In each case the result is a PNG file that is completely black.
I found there is some problem with the glReadPixels method, but I don't know how to work around it.
Sorry for the late response...
In order to take a correct screenshot, You have to put the following code into Your onDrawFrame(GL10 gl) handler:
if (screenshot) {
    int screenshotSize = width * height;
    ByteBuffer bb = ByteBuffer.allocateDirect(screenshotSize * 4);
    bb.order(ByteOrder.nativeOrder());
    gl.glReadPixels(0, 0, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, bb);
    int pixelsBuffer[] = new int[screenshotSize];
    bb.asIntBuffer().get(pixelsBuffer);
    bb = null;
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
    bitmap.setPixels(pixelsBuffer, screenshotSize - width, -width, 0, 0, width, height);
    pixelsBuffer = null;
    short sBuffer[] = new short[screenshotSize];
    ShortBuffer sb = ShortBuffer.wrap(sBuffer);
    bitmap.copyPixelsToBuffer(sb);
    // Making created bitmap (from OpenGL points) compatible with Android bitmap
    for (int i = 0; i < screenshotSize; ++i) {
        short v = sBuffer[i];
        sBuffer[i] = (short) (((v & 0x1f) << 11) | (v & 0x7e0) | ((v & 0xf800) >> 11));
    }
    sb.rewind();
    bitmap.copyPixelsFromBuffer(sb);
    lastScreenshot = bitmap;
    screenshot = false;
}
The "screenshot" class field is set to true whenever the user presses the button to create a screenshot
or at any other circumstances You want. Inside the "if" body You may place any screenshot creating code sample You find in th internet - the most important thing is having the current instance of GL10. For example when You just save the GL10 instance to the class variable and then use it outside the event to create the screenshot You'll end up with the completely blank image. That's why You have to take a screenshot inside the OnDrawFrame event handler where the GL10 instance is the current one.
Hope that it helps.
Best regards, Gordon.
Here is the way to do it if you want to preserve the quality (8 bits for every colour channel: red, green, blue and alpha too):
if (this.screenshot) {
    int screenshotSize = this.width * this.height;
    ByteBuffer bb = ByteBuffer.allocateDirect(screenshotSize * 4);
    bb.order(ByteOrder.nativeOrder());
    gl.glReadPixels(0, 0, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, bb);
    int pixelsBuffer[] = new int[screenshotSize];
    bb.asIntBuffer().get(pixelsBuffer);
    bb = null;
    for (int i = 0; i < screenshotSize; ++i) {
        // The alpha and green channels' positions are preserved while the red and blue are swapped
        pixelsBuffer[i] = ((pixelsBuffer[i] & 0xff00ff00)) | ((pixelsBuffer[i] & 0x000000ff) << 16) | ((pixelsBuffer[i] & 0x00ff0000) >> 16);
    }
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bitmap.setPixels(pixelsBuffer, screenshotSize - width, -width, 0, 0, width, height);
    this.screenshot = false;
}
Got it!
My mistake was that I was holding on to the GL context in a class variable. To take a screenshot, I have to use the GL context passed to onDrawFrame in the class implementing the GLSurfaceView.Renderer interface. I simply run my code inside the "if" clause above and everything works as expected. Hope that remark helps someone.
Best regards,
Gordon
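To tie these answers together, here is a minimal renderer skeleton of the pattern (a sketch only; lastScreenshot and the screenshot flag mirror the fields used above, and the flag is volatile because it is set from the UI thread):

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.graphics.Bitmap;
import android.opengl.GLSurfaceView;

public class CaptureRenderer implements GLSurfaceView.Renderer {
    private volatile boolean screenshot; // set to true from the UI thread
    private Bitmap lastScreenshot;
    private int width, height;

    public void requestScreenshot() { screenshot = true; }

    @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) { }

    @Override public void onSurfaceChanged(GL10 gl, int w, int h) {
        width = w;
        height = h;
        gl.glViewport(0, 0, w, h);
    }

    @Override public void onDrawFrame(GL10 gl) {
        // ... normal render logic ...
        if (screenshot) {
            // gl is current on this thread, so glReadPixels sees the frame
            // just drawn; drop any of the capture snippets above in here.
            screenshot = false;
        }
    }
}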
