I can use the following code to convert an OpenGL RGBA int array into an Android ARGB int array.
int b[] = new int[w * h];
int bt[] = new int[w * h];
IntBuffer buffer = IntBuffer.wrap(b);
buffer.position(0);
GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buffer);
for (int i = 0; i < h; i++) {
    // Remember that the OpenGL pixel layout is incompatible with the Android bitmap,
    // so some correction is needed (channel swap and vertical flip).
    for (int j = 0; j < w; j++) {
        int pix = b[i * w + j];
        int pb = (pix >> 16) & 0xff;
        int pr = (pix << 16) & 0x00ff0000;
        int pix1 = (pix & 0xff00ff00) | pr | pb;
        bt[(h - i - 1) * w + j] = pix1;
    }
}
Though I can convert the int array to a byte array to achieve my goal, the efficiency is a little low, so I want to use the following code to achieve the same thing:
final ByteBuffer rgbaData = ByteBuffer.allocateDirect(WIDTH * HEIGHT * 4);
GLES20.glReadPixels(0, 0, WIDTH, HEIGHT, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, rgbaData);
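A minimal sketch of how this could be finished (my own, not tested; it assumes an ARGB_8888 bitmap, whose in-memory layout is RGBA and therefore matches the glReadPixels output, so no per-pixel channel swap is needed and only a vertical flip remains):

final ByteBuffer rgbaData = ByteBuffer.allocateDirect(WIDTH * HEIGHT * 4)
        .order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, WIDTH, HEIGHT, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, rgbaData);
rgbaData.rewind();

// copyPixelsFromBuffer takes the bytes as-is, so no per-pixel loop is required.
Bitmap bitmap = Bitmap.createBitmap(WIDTH, HEIGHT, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(rgbaData);

// OpenGL's origin is the bottom-left corner, so flip the result vertically.
Matrix flip = new Matrix();
flip.postScale(1f, -1f);
Bitmap upright = Bitmap.createBitmap(bitmap, 0, 0, WIDTH, HEIGHT, flip, false);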
I've spent a lot of time trying to figure this out but can't see what I am doing wrong.
This is my original image
Image recaptured 5 times
Recapturing the image multiple times clearly shows that there is something not right. Capturing it once is just ok but twice is enough to clearly see the difference.
I found these similar issues on stackoverflow:
Bitmap quality using glReadPixels with frame buffer objects
Extract pixels from TextureSurface using glReadPixels resulting in bad image Bitmap
(sorry, I'm limited in the number of links I can add)
Unfortunately none of the proposed suggestions/solutions fixed my issue.
This is my code:
private Bitmap createSnapshot(int x, int y, int w, int h) {
    int bitmapBuffer[] = new int[w * h];
    int bitmapSource[] = new int[w * h];
    IntBuffer intBuffer = IntBuffer.wrap(bitmapBuffer);
    intBuffer.position(0);
    try {
        glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, intBuffer);
        int offset1, offset2;
        for (int i = 0; i < h; i++) {
            offset1 = i * w;
            offset2 = (h - i - 1) * w;
            for (int j = 0; j < w; j++) {
                int texturePixel = bitmapBuffer[offset1 + j];
                int blue = (texturePixel >> 16) & 0xff;
                int red = (texturePixel << 16) & 0x00ff0000;
                int pixel = (texturePixel & 0xff00ff00) | red | blue;
                bitmapSource[offset2 + j] = pixel;
            }
        }
    } catch (GLException e) {
        return null;
    }
    return Bitmap.createBitmap(bitmapSource, w, h, Bitmap.Config.ARGB_8888);
}
I'm using OpenGL ES 2. For bitmap compression I am using PNG. I also tested JPEG (quality 100) and the result is similar, just slightly worse.
There is also a slight yellowish tint added to the image.
I have a bitmap(which can be converted to a ByteBuffer). I want to upload all of its 6 faces by offsets to the GPU in OpenGL. When I do the following, the app crashes with OpenGL giving a memory violation.
Here bitmap is a byte array (byte[]):
for (int i = 0; i < 6; i++) {
    GLES20.glTexImage2D(
            GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X + i,
            0,
            GLES20.GL_RGBA,
            side,
            side,
            0,
            GLES20.GL_RGBA,
            GLES20.GL_UNSIGNED_BYTE,
            ByteBuffer.wrap(bitmap, length / 6 * i, side * side * 4));
}
But when I copy the array and then upload to the GPU like this (here bitmap is of type Bitmap):
int numBytes = bitmap.getByteCount();
ByteBuffer pixels = ByteBuffer.allocate(numBytes);
bitmap.copyPixelsToBuffer(pixels);

for (int i = 0; i < 6; i++) {
    Log.d("aakash", String.valueOf(numBytes / 6 * i));
    byte[] arr = Arrays.copyOfRange(pixels.array(), numBytes / 6 * i, numBytes / 6 * (i + 1));
    GLES20.glTexImage2D(
            GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X + i,
            0,
            GLES20.GL_RGBA,
            bitmap.getWidth(),
            bitmap.getHeight() / 6,
            0,
            GLES20.GL_RGBA,
            GLES20.GL_UNSIGNED_BYTE,
            ByteBuffer.wrap(arr));
}
I get the cubemap correctly rendered.
What am I doing wrong in the first one? I want to avoid copying the array to upload parts of it to the GPU.
I can assure that the size and the mathematical calculations are correct.
To avoid the memory violation, just replace
ByteBuffer pixels = ByteBuffer.allocate(numBytes);
with
ByteBuffer pixels = ByteBuffer.allocateDirect(numBytes);
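If the goal is also to avoid the per-face copy, one option (my own sketch, not part of this answer; it assumes the six faces are stacked vertically in the bitmap, each side x side RGBA pixels) is to keep a single direct buffer and just move its position before each upload, since the GLES bindings read from the buffer's current position:

int numBytes = bitmap.getByteCount();
ByteBuffer pixels = ByteBuffer.allocateDirect(numBytes).order(ByteOrder.nativeOrder());
bitmap.copyPixelsToBuffer(pixels);
int faceBytes = side * side * 4;
for (int i = 0; i < 6; i++) {
    pixels.position(i * faceBytes); // upload starts at the buffer's current position
    GLES20.glTexImage2D(
            GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X + i,
            0, GLES20.GL_RGBA, side, side, 0,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
}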
But you don't need a ByteBuffer for simple side-texture loading:
loadSideTexture(context, GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X, R.raw.lake2_rt);
loadSideTexture(context, GLES20.GL_TEXTURE_CUBE_MAP_NEGATIVE_X, R.raw.lake2_lf);
loadSideTexture(context, GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_Y, R.raw.lake2_up);
loadSideTexture(context, GLES20.GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, R.raw.lake2_dn);
loadSideTexture(context, GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_Z, R.raw.lake2_bk);
loadSideTexture(context, GLES20.GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, R.raw.lake2_ft);

private void loadSideTexture(Context context, int target, @RawRes int resID) {
    final Bitmap bitmap = BitmapFactory.decodeStream(context.getResources().openRawResource(resID));
    GLUtils.texImage2D(target, 0, bitmap, 0);
    bitmap.recycle();
}
I am trying to set a region of pixels in a mutable bitmap to a different color in my Android app. Unfortunately I cannot get setPixels() to work properly; I am constantly getting ArrayIndexOutOfBoundsExceptions. I think it may have something to do with the stride, but I'm really not sure. That is the only parameter I still don't understand. The only other post I have seen on setPixels (not setPixel) is here: drawBitmap() and setPixels(): what's the stride? and it did not help me. I tried setting the stride as 0, as the width of the bitmap, and as the width of the bitmap minus the area I'm trying to draw, and it still crashes. Here is my code:
public void updateBitmap(byte[] buf, int offset, int x, int y, int width, int height) {
    // transform byte[] to int[]
    IntBuffer intBuf = ByteBuffer.wrap(buf).asIntBuffer();
    int[] intarray = new int[intBuf.remaining()];
    intBuf.get(intarray);

    int stride = ??????
    screenBitmap.setPixels(intarray, offset, stride, x, y, width, height); // crash here
}
My bitmap is mutable, so I know that is not the problem. I am also certain that my byte array is being properly converted to an integer array. But I keep getting ArrayIndexOutOfBoundsExceptions and I don't understand why. Please help me figure this out.
EDIT -
Here is how I construct the fake input:
int width = 1300;
int height = 700;
byte[] buf = new byte[width * height * 4 * 4]; // adding another * 4 here seems to work... why?
for (int i = 0; i < width * height * 4 * 4; i += 4) {
    buf[i] = (byte) 255;
    buf[i + 1] = 3;
    buf[i + 2] = (byte) 255;
    buf[i + 3] = 3;
}
//(byte[] buf, int offset, int x, int y, int width, int height) - for reference
siv.updateBitmap(buf, 0, 0, 0, width, height);
So the width and height are the correct number of ints (at least they should be).
EDIT2 - here is the code for the original creation of screenBitmap:
public Bitmap createABitmap() {
    int w = 1366;
    int h = 766;
    byte[] buf = new byte[h * w * 4];
    for (int i = 0; i < h * w * 4; i += 4) {
        buf[i] = (byte) 255;
        buf[i + 1] = (byte) 255;
        buf[i + 2] = 0;
        buf[i + 3] = 0;
    }
    DisplayMetrics metrics = new DisplayMetrics();
    getWindowManager().getDefaultDisplay().getMetrics(metrics);
    IntBuffer intBuf = ByteBuffer.wrap(buf).asIntBuffer();
    int[] intarray = new int[intBuf.remaining()];
    intBuf.get(intarray);
    Bitmap bmp = Bitmap.createBitmap(metrics, w, h, Bitmap.Config.ARGB_8888);
    bmp.setPixels(intarray, 0, w, 0, 0, w, h);
    return bmp;
}
It seems to work in this instance; I'm not sure what the difference is.
Short

I think stride is the width of the image contained in the pixels array.

Long

From what I take from the documentation, the algorithm should work something like this:
public void setPixels(int[] pixels,
                      int offset,
                      int stride,
                      int x,
                      int y,
                      int width,
                      int height) {
    for (int rowIndex = 0; rowIndex < height; rowIndex++) {
        int rowStart = offset + stride * rowIndex; // row position in pixels
        int bitmapY = y + rowIndex;                // y position in the bitmap
        for (int columnIndex = 0; columnIndex < width; columnIndex++) {
            int bitmapX = x + columnIndex;            // x position in the bitmap
            int pixelsIndex = rowStart + columnIndex; // position in pixels
            setPixel(bitmapX, bitmapY, pixels[pixelsIndex]);
        }
    }
}
At least this is what I would do with these arguments, because it allows you to use one pixels array as the source and cut out different images of different sizes.
Feel free to correct the algorithm if I am wrong.
So, say you have an image with pixelsWidth and pixelsHeight in pixels.
Then you want to copy a section at pixelsX and pixelsY with width and height to a bitmap at position bitmapX and bitmapY.
This should be the call:
bitmap.setPixels(
        pixels,
        pixelsWidth * pixelsY + pixelsX, // offset
        pixelsWidth,                     // stride
        bitmapX,                         // x
        bitmapY,                         // y
        width,
        height);
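For example (my own hypothetical numbers, not from the question): to copy a 100x50 region starting at (10, 20) of a 640x480 source array into the bitmap at (0, 0), the stride stays the full source width, because consecutive rows of the region are 640 ints apart in the source:

int pixelsWidth = 640, pixelsHeight = 480;          // size of the source image
int[] pixels = new int[pixelsWidth * pixelsHeight]; // ARGB source pixels
int pixelsX = 10, pixelsY = 20;                     // top-left corner of the region to copy
int width = 100, height = 50;                       // size of the region

bitmap.setPixels(
        pixels,
        pixelsWidth * pixelsY + pixelsX, // offset of the region's first pixel
        pixelsWidth,                     // stride = full source row width
        0,                               // x in the bitmap
        0,                               // y in the bitmap
        width,
        height);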
Probably it should be:
screenBitmap.setPixels(intarray, 0, width / 4, x, y, width / 4, height);
because you have converted bytes to ints. Your error is an ArrayIndexOutOfBoundsException; check whether intBuf.remaining() == width * height / 4.
If you're trying to draw on a bitmap, your best approach would be to abandon this method and use a canvas instead:
Canvas canvas = new Canvas(screenBitmap);
You can then draw specific points (if you want to draw a pixel), or other shapes like rectangles, circles, etc:
canvas.drawPoint(x, y, paint);
etc
Hope this helps.
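For example, a minimal sketch (my own, assuming screenBitmap is mutable):

Canvas canvas = new Canvas(screenBitmap);
Paint paint = new Paint();
paint.setColor(Color.RED);
canvas.drawPoint(10f, 10f, paint);            // a single pixel
canvas.drawRect(50f, 50f, 150f, 120f, paint); // a filled rectangle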
I have an application that uses GLSurfaceView to do some 3-D goodness in Android. I'd like the user to be able to take a screenshot. I think this snippet of code should capture the pixel colors and store them in a bitmap. However, does anyone know how to access the GL10 element the screen is using, so I can feed it into this function?
public static Bitmap savePixelsOnScreen(int x, int y, int w, int h, GL10 gl) {
    int b[] = new int[w * (y + h)];
    int bt[] = new int[w * h];
    IntBuffer ib = IntBuffer.wrap(b);
    ib.position(0);
    //gl.glReadPixels(x, y, w, h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, ib);
    gl.glReadPixels(x, 0, w, y + h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, ib);
    for (int i = 0, k = 0; i < h; i++, k++) {
        // Remember that the OpenGL pixel layout is incompatible with the Android bitmap,
        // so some correction is needed.
        for (int j = 0; j < w; j++) {
            int pix = b[i * w + j];
            int pb = (pix >> 16) & 0xff;
            int pr = (pix << 16) & 0x00ff0000;
            int pix1 = (pix & 0xff00ff00) | pr | pb;
            bt[(h - k - 1) * w + j] = pix1;
        }
    }
    Bitmap sb = Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888);
    return sb;
}
I had tried to do this from within the Activity that calls the GLSurfaceView:
EGL10 egl = (EGL10)EGLContext.getEGL();
GL10 gl = (GL10)egl.eglGetCurrentContext().getGL();
But every element of the int buffer is zero so I think this is not correct.
Put the code in the draw(GL10 gl) method of your CCLayer/CCNode class, and maintain a boolean variable so it only runs once; the draw method is called 60+ times per second.
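For a plain GLSurfaceView (rather than cocos2d), a rough sketch of the same idea, assuming a savePixelsOnScreen helper like the one above and hypothetical surfaceWidth/surfaceHeight fields holding the surface size:

private volatile boolean takeScreenshot = false; // set to true from the UI when a shot is wanted

@Override
public void onDrawFrame(GL10 gl) {
    // ... normal rendering ...
    if (takeScreenshot) {
        takeScreenshot = false; // only capture once
        Bitmap shot = savePixelsOnScreen(0, 0, surfaceWidth, surfaceHeight, gl);
        // hand the bitmap off to the UI thread or save it to disk here
    }
}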
I'd like to get access to the main (OpenGL) screen in Android to implement some overlay 3D effects.
Is it possible to do so?
If yes, how can I do it?
When amending this context, my application should be a service, right?
You cannot get access to the framebuffer, for obvious security reasons.
What you will probably want to do is research the glReadPixels() function. I ran a test where I had the screen split between a GLSurfaceView and an ImageView and wanted to see if I could grab the pixels from the GL view, create a Bitmap, and then apply it to the ImageView. After some research, I found that glReadPixels() works, but you have to transform the pixels before using them in an Android bitmap. This is the method I ended up using; I'm fairly confident I found it exactly this way on another forum.
public Bitmap SaveGLPixels(int x, int y, int w, int h, GL10 gl) {
    int b[] = new int[w * h];
    int bt[] = new int[w * h];
    IntBuffer ib = IntBuffer.wrap(b);
    ib.position(0);
    gl.glReadPixels(x, y, w, h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, ib);
    for (int i = 0; i < h; i++) {
        // Remember that the OpenGL pixel layout is incompatible with the Android bitmap,
        // so some correction is needed.
        for (int j = 0; j < w; j++) {
            int pix = b[i * w + j];
            int pb = (pix >> 16) & 0xff;
            int pr = (pix << 16) & 0x00ff0000;
            int pix1 = (pix & 0xff00ff00) | pr | pb;
            bt[(h - i - 1) * w + j] = pix1;
        }
    }
    Bitmap.Config bconfig = Bitmap.Config.RGB_565;
    Bitmap sb = Bitmap.createBitmap(bt, w, h, bconfig);
    return sb;
}