I am doing the following to capture the bitmap content of a GLSurfaceView object:
glView.setDrawingCacheEnabled(true);
glView.measure(View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED),
View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED));
glView.layout(0, 0, glView.getMeasuredWidth(), glView.getMeasuredHeight());
glView.buildDrawingCache(true);
Bitmap tmpbm = Bitmap.createBitmap(glView.getDrawingCache());
glView.setDrawingCacheEnabled(false);
But glView.getDrawingCache() returns null here, so the app crashes on the line Bitmap tmpbm = Bitmap.createBitmap(glView.getDrawingCache());
Why am I getting null there, and how do I fix it? Also, is there a different / better way to achieve my goal? Any help would be highly appreciated.
A GLSurfaceView renders into its own surface rather than through the normal View drawing path, so the drawing cache has nothing in it and getDrawingCache() returns null. Read the pixels back from the GL context instead. Try this method:
private Bitmap createBitmapFromGLSurface(int x, int y, int w, int h, GL10 gl)
        throws OutOfMemoryError {
    int bitmapBuffer[] = new int[w * h];
    int bitmapSource[] = new int[w * h];
    IntBuffer intBuffer = IntBuffer.wrap(bitmapBuffer);
    intBuffer.position(0);
    try {
        // Read the raw RGBA pixels straight out of the GL framebuffer.
        gl.glReadPixels(x, y, w, h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, intBuffer);
        int offset1, offset2;
        for (int i = 0; i < h; i++) {
            // OpenGL returns the image bottom-up, so flip it vertically
            // while swapping the red and blue channels (ABGR -> ARGB).
            offset1 = i * w;
            offset2 = (h - i - 1) * w;
            for (int j = 0; j < w; j++) {
                int texturePixel = bitmapBuffer[offset1 + j];
                int blue = (texturePixel >> 16) & 0xff;
                int red = (texturePixel << 16) & 0x00ff0000;
                int pixel = (texturePixel & 0xff00ff00) | red | blue;
                bitmapSource[offset2 + j] = pixel;
            }
        }
    } catch (GLException e) {
        return null;
    }
    return Bitmap.createBitmap(bitmapSource, w, h, Bitmap.Config.ARGB_8888);
}
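For reference, here is a minimal sketch of how such a helper might be driven from the renderer, assuming the method above is accessible to it (for example, defined in the same class); glReadPixels has to run on the GL thread, so the capture happens inside onDrawFrame. The class and field names (CaptureRenderer, mTakeScreenshot, drawScene) are illustrative, not part of any API:
class CaptureRenderer implements GLSurfaceView.Renderer {
    volatile boolean mTakeScreenshot;   // set from the UI thread when a capture is wanted
    Bitmap mLastFrame;
    private int mWidth, mHeight;

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) { }

    @Override
    public void onSurfaceChanged(GL10 gl, int w, int h) {
        mWidth = w;
        mHeight = h;
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        drawScene(gl);                  // your existing drawing code
        if (mTakeScreenshot) {
            mTakeScreenshot = false;
            mLastFrame = createBitmapFromGLSurface(0, 0, mWidth, mHeight, gl);
        }
    }

    private void drawScene(GL10 gl) { /* existing rendering goes here */ }
}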
I am trying to convert a Bitmap to a Mat using OpenCV for Android from org.bytedeco.javacpp-presets:opencv:3.4.1-1.4.1.
This version of OpenCV does not seem to have Utils.bitmapToMat or Utils.matToBitmap.
Here is my code:
Bitmap bradsFace = BitmapFactory.decodeResource(getResources(), R.drawable.brad_face);
int orgWidth = bradsFace.getWidth();
int orgHeight = bradsFace.getHeight();
int[] pixels = new int[orgWidth * orgHeight];
bradsFace.getPixels(pixels, 0, orgWidth, 0, 0, orgWidth, orgHeight);
Mat m = new Mat(orgHeight, orgWidth, CvType.CV_8UC4);
int id = 0;
for (int row = 0; row < orgHeight; row++) {
    for (int col = 0; col < orgWidth; col++) {
        int color = pixels[id];
        int a = (color >> 24) & 0xff; // or color >>> 24
        int r = (color >> 16) & 0xff;
        int g = (color >> 8) & 0xff;
        int b = (color) & 0xff;
        m.put(row, col, new int[]{a, r, g, b});
        id++;
    }
}
My app crashes in the Mat .put() call. Am I doing something wrong?
In case anyone is looking for a way to convert a Mat to a Bitmap on Android, I was able to find something:
public static Bitmap matToBitmap(Mat m) {
    int mWidth = m.width();
    int mHeight = m.height();
    int[] pixels = new int[mWidth * mHeight];
    int type = m.type();
    int index = 0;
    int a = 255;
    int b = 255;
    int g = 255;
    int r = 255;
    double[] abgr;
    Bitmap bitmap = Bitmap.createBitmap(mWidth, mHeight, Bitmap.Config.ARGB_8888);
    try {
        for (int ro = 0; ro < m.rows(); ro++) {
            for (int c = 0; c < m.cols(); c++) {
                abgr = m.get(ro, c);
                if (type == CvType.CV_8UC4) {
                    // 4-channel Mat: channels are read here in A, B, G, R order
                    a = (int) abgr[0];
                    b = (int) abgr[1];
                    g = (int) abgr[2];
                    r = (int) abgr[3];
                } else if (type == CvType.CV_8UC3) {
                    // 3-channel Mat: no alpha channel, assume fully opaque
                    a = 255;
                    b = (int) abgr[0];
                    g = (int) abgr[1];
                    r = (int) abgr[2];
                }
                // Pack the channels into a single ARGB_8888 pixel
                int color = ((a << 24) & 0xFF000000) +
                        ((r << 16) & 0x00FF0000) +
                        ((g << 8) & 0x0000FF00) +
                        (b & 0x000000FF);
                pixels[index] = color;
                index++;
            }
        }
        bitmap.setPixels(pixels, 0, mWidth, 0, 0, mWidth, mHeight);
    } catch (Exception e) {
        return bitmap;
    }
    return bitmap;
}
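A quick usage sketch; processedMat stands in for whatever Mat your pipeline produced, and imageView is an assumed view in your layout:
Bitmap converted = matToBitmap(processedMat);
if (converted != null) {
    imageView.setImageBitmap(converted);   // display the converted Mat
}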
Hi, I am applying effects to bitmap images using a 4x4 color LUT. The effects are applied, but there is some noise (shading artifacts) in the filtered images.
How can I avoid this noise, and why is image quality being lost?
I am using the code below to apply the filter effects:
final static int X_DEPTH = 16;
final static int Y_DEPTH = 16; // one little square has 16x16 pixels in it
final static int ROW_DEPTH = 4;
final static int COLUMN_DEPTH = 4; // the image consists of 4x4 little squares
final static int COLOR_DISTORTION = 16; // 256x256x256 => 256, no distortion; 64x64x64 => 256 divided by 4 = 64; 16x16x16 => 256 divided by 16 = 16

private Bitmap applyLutToBitmap(Bitmap src, Bitmap lutBitmap) {
    int lutWidth = lutBitmap.getWidth();
    int lutColors[] = new int[lutWidth * lutBitmap.getHeight()];
    lutBitmap.getPixels(lutColors, 0, lutWidth, 0, 0, lutWidth, lutBitmap.getHeight());
    int mWidth = src.getWidth();
    int mHeight = src.getHeight();
    int[] pix = new int[mWidth * mHeight];
    src.getPixels(pix, 0, mWidth, 0, 0, mWidth, mHeight);
    int R, G, B;
    for (int y = 0; y < mHeight; y++)
        for (int x = 0; x < mWidth; x++) {
            int index = y * mWidth + x;
            int r = ((pix[index] >> 16) & 0xff) / COLOR_DISTORTION;
            int g = ((pix[index] >> 8) & 0xff) / COLOR_DISTORTION;
            int b = (pix[index] & 0xff) / COLOR_DISTORTION;
            int lutIndex = getLutIndex(lutWidth, r, g, b);
            R = ((lutColors[lutIndex] >> 16) & 0xff);
            G = ((lutColors[lutIndex] >> 8) & 0xff);
            B = ((lutColors[lutIndex]) & 0xff);
            pix[index] = 0xff000000 | (R << 16) | (G << 8) | B;
        }
    Bitmap filteredBitmap = Bitmap.createBitmap(mWidth, mHeight, src.getConfig());
    filteredBitmap.setPixels(pix, 0, mWidth, 0, 0, mWidth, mHeight);
    return filteredBitmap;
}

// the magic happens here
private int getLutIndex(int lutWidth, int redDepth, int greenDepth, int blueDepth) {
    int lutX = (greenDepth % ROW_DEPTH) * X_DEPTH + blueDepth;
    int lutY = (greenDepth / COLUMN_DEPTH) * Y_DEPTH + redDepth;
    return lutY * lutWidth + lutX;
}
The link below shows my LUT, the original image, and the filtered image: http://imgur.com/a/4BVio
Please look at the filtered image: some noise appears in it and image quality is lost. How can I apply filter effects without noise and without losing image quality using a 4x4 LUT?
Here I have a question on LUTs in Android.
My question is: I have 4x4 LUTs, and I want to use these LUTs to apply a filter effect to a bitmap image in Android. Below is my sample LUT file link.
Lut link sample
Is it possible in Android? If possible, please help me with how to apply it.
Thanks in advance.
I'm working on a LUT applier library which eases the use of LUT images in Android. It uses the algorithm below, but I'd like to enhance it in the future to optimise memory usage. It now also guesses the color axes of the LUT:
https://github.com/dntks/easyLUT/wiki
Your LUT image has the red-green-blue color dimensions in a different order than what I've been used to, so I had to change the order used when computing the lutIndex (in getLutIndex()).
Please check my edited answer:
final static int X_DEPTH = 16;
final static int Y_DEPTH = 16; // one little square has 16x16 pixels in it
final static int ROW_DEPTH = 4;
final static int COLUMN_DEPTH = 4; // the image consists of 4x4 little squares
final static int COLOR_DISTORTION = 16; // 256x256x256 => 256, no distortion; 64x64x64 => 256 divided by 4 = 64; 16x16x16 => 256 divided by 16 = 16

private Bitmap applyLutToBitmap(Bitmap src, Bitmap lutBitmap) {
    int lutWidth = lutBitmap.getWidth();
    int lutColors[] = new int[lutWidth * lutBitmap.getHeight()];
    lutBitmap.getPixels(lutColors, 0, lutWidth, 0, 0, lutWidth, lutBitmap.getHeight());
    int mWidth = src.getWidth();
    int mHeight = src.getHeight();
    int[] pix = new int[mWidth * mHeight];
    src.getPixels(pix, 0, mWidth, 0, 0, mWidth, mHeight);
    int R, G, B;
    for (int y = 0; y < mHeight; y++)
        for (int x = 0; x < mWidth; x++) {
            int index = y * mWidth + x;
            int r = ((pix[index] >> 16) & 0xff) / COLOR_DISTORTION;
            int g = ((pix[index] >> 8) & 0xff) / COLOR_DISTORTION;
            int b = (pix[index] & 0xff) / COLOR_DISTORTION;
            int lutIndex = getLutIndex(lutWidth, r, g, b);
            R = ((lutColors[lutIndex] >> 16) & 0xff);
            G = ((lutColors[lutIndex] >> 8) & 0xff);
            B = ((lutColors[lutIndex]) & 0xff);
            pix[index] = 0xff000000 | (R << 16) | (G << 8) | B;
        }
    Bitmap filteredBitmap = Bitmap.createBitmap(mWidth, mHeight, src.getConfig());
    filteredBitmap.setPixels(pix, 0, mWidth, 0, 0, mWidth, mHeight);
    return filteredBitmap;
}

// the magic happens here
private int getLutIndex(int lutWidth, int redDepth, int greenDepth, int blueDepth) {
    int lutX = (greenDepth % ROW_DEPTH) * X_DEPTH + blueDepth;
    int lutY = (greenDepth / COLUMN_DEPTH) * Y_DEPTH + redDepth;
    return lutY * lutWidth + lutX;
}
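A minimal usage sketch, assuming the source photo and the 4x4 LUT are both stored in res/drawable (the resource and view names here are illustrative):
// Illustrative usage: load the photo and the LUT, then apply the filter.
Bitmap source = BitmapFactory.decodeResource(getResources(), R.drawable.photo);
Bitmap lut = BitmapFactory.decodeResource(getResources(), R.drawable.lut_4x4);
Bitmap filtered = applyLutToBitmap(source, lut);
imageView.setImageBitmap(filtered);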
This is how you would process an image with RenderScript's ScriptIntrinsic3DLUT:
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.AsyncTask;
import android.os.Bundle;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsic3DLUT;
import android.renderscript.Type;
import android.support.v7.app.AppCompatActivity;
import android.widget.ImageView;

public class MainActivity extends AppCompatActivity {
    ImageView imageView1;
    RenderScript mRs;
    Bitmap mBitmap;
    Bitmap mLutBitmap;
    ScriptIntrinsic3DLUT mScriptlut;
    Bitmap mOutputBitmap;
    Allocation mAllocIn;
    Allocation mAllocOut;
    Allocation mAllocCube;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        imageView1 = (ImageView) findViewById(R.id.imageView);
        mRs = RenderScript.create(this);
        Background background = new Background();
        background.execute();
    }

    class Background extends AsyncTask<Void, Void, Void> {

        @Override
        protected Void doInBackground(Void... params) {
            if (mRs == null) {
                mRs = RenderScript.create(MainActivity.this);
            }
            if (mBitmap == null) {
                mBitmap = BitmapFactory.decodeResource(getResources(),
                        R.drawable.bugs);
                mOutputBitmap = Bitmap.createBitmap(mBitmap.getWidth(), mBitmap.getHeight(), mBitmap.getConfig());
                mAllocIn = Allocation.createFromBitmap(mRs, mBitmap);
                mAllocOut = Allocation.createFromBitmap(mRs, mOutputBitmap);
            }
            if (mLutBitmap == null) {
                mLutBitmap = BitmapFactory.decodeResource(getResources(),
                        R.drawable.dawizfe);
                int w = mLutBitmap.getWidth();
                int h = mLutBitmap.getHeight();
                int redDim = w / 4;
                int greenDim = h / 4;
                int blueDim = 16;
                android.renderscript.Type.Builder tb = new Type.Builder(mRs, Element.U8_4(mRs));
                tb.setX(redDim);
                tb.setY(greenDim);
                tb.setZ(blueDim);
                Type t = tb.create();
                mAllocCube = Allocation.createTyped(mRs, t);
                int[] pixels = new int[w * h];
                int[] lut = new int[w * h];
                mLutBitmap.getPixels(pixels, 0, w, 0, 0, w, h);
                int i = 0;
                for (int r = 0; r < redDim; r++) {
                    for (int g = 0; g < greenDim; g++) {
                        for (int b = 0; b < blueDim; b++) {
                            int gdown = g / 4;
                            int gright = g % 4;
                            lut[i] = pixels[b + r * w + gdown * w * redDim + gright * blueDim];
                            i++;
                        }
                    }
                }
                // This is an identity 3D LUT
                // i = 0;
                // for (int r = 0; r < redDim; r++) {
                //     for (int g = 0; g < greenDim; g++) {
                //         for (int b = 0; b < blueDim; b++) {
                //             int bcol = (b * 255) / blueDim;
                //             int gcol = (g * 255) / greenDim;
                //             int rcol = (r * 255) / redDim;
                //             lut[i] = bcol | (gcol << 8) | (rcol << 16);
                //             i++;
                //         }
                //     }
                // }
                mAllocCube.copyFromUnchecked(lut);
            }
            if (mScriptlut == null) {
                mScriptlut = ScriptIntrinsic3DLUT.create(mRs, Element.U8_4(mRs));
            }
            mScriptlut.setLUT(mAllocCube);
            mScriptlut.forEach(mAllocIn, mAllocOut);
            mAllocOut.copyTo(mOutputBitmap);
            return null;
        }

        @Override
        protected void onPostExecute(Void aVoid) {
            imageView1.setImageBitmap(mOutputBitmap);
        }
    }
}
You can go through this; I hope it helps you get the right process.
photo is the main bitmap here.
mLut3D is the array of LUT images stored in drawable.
RenderScript mRs;
Bitmap mLutBitmap, mBitmap;
ScriptIntrinsic3DLUT mScriptlut;
Bitmap mOutputBitmap;
Allocation mAllocIn;
Allocation mAllocOut;
Allocation mAllocCube;
int mFilter = 0;

mRs = RenderScript.create(yourActivity.this);

public Bitmap filterapply() {
    int redDim, greenDim, blueDim;
    int w, h;
    int[] lut;
    if (mScriptlut == null) {
        mScriptlut = ScriptIntrinsic3DLUT.create(mRs, Element.U8_4(mRs));
    }
    if (mBitmap == null) {
        mBitmap = photo;
    }
    mOutputBitmap = Bitmap.createBitmap(mBitmap.getWidth(),
            mBitmap.getHeight(), mBitmap.getConfig());
    mAllocIn = Allocation.createFromBitmap(mRs, mBitmap);
    mAllocOut = Allocation.createFromBitmap(mRs, mOutputBitmap);
    // }
    mLutBitmap = BitmapFactory.decodeResource(getResources(),
            mLut3D[mFilter]);
    w = mLutBitmap.getWidth();
    h = mLutBitmap.getHeight();
    redDim = w / h;
    greenDim = redDim;
    blueDim = redDim;
    int[] pixels = new int[w * h];
    lut = new int[w * h];
    mLutBitmap.getPixels(pixels, 0, w, 0, 0, w, h);
    int i = 0;
    for (int r = 0; r < redDim; r++) {
        for (int g = 0; g < greenDim; g++) {
            int p = r + g * w;
            for (int b = 0; b < blueDim; b++) {
                lut[i++] = pixels[p + b * h];
            }
        }
    }
    Type.Builder tb = new Type.Builder(mRs, Element.U8_4(mRs));
    tb.setX(redDim).setY(greenDim).setZ(blueDim);
    Type t = tb.create();
    mAllocCube = Allocation.createTyped(mRs, t);
    mAllocCube.copyFromUnchecked(lut);
    mScriptlut.setLUT(mAllocCube);
    mScriptlut.forEach(mAllocIn, mAllocOut);
    mAllocOut.copyTo(mOutputBitmap);
    return mOutputBitmap;
}
You increase the mFilter value to get a different filter effect with each of the LUT images you have; check it out.
You can go through this link on GitHub for more help; I got the answer from there:
https://github.com/RenderScript/RsLutDemo
Hope it will help.
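For illustration, a small sketch of how this could be driven; photo, mLut3D and mFilter are the fields from the snippet above, and imageView is an assumed view in your layout:
// Illustrative only: step to the next LUT drawable and display the result.
mFilter = (mFilter + 1) % mLut3D.length;
Bitmap filtered = filterapply();
imageView.setImageBitmap(filtered);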
I am creating a bitmap from a GLSurfaceView and adding it to an ArrayList, but when I create the bitmap from the GLSurfaceView it throws an OutOfMemoryError.
CODE:
Bitmap bitmap = createBitmapFromGLSurface(0, 0, mEffectView.getWidth(),
mEffectView.getHeight(), gl);
al_bitmaps.add(bitmap);
Method:
private Bitmap createBitmapFromGLSurface(int x, int y, int w, int h, GL10 gl)
        throws OutOfMemoryError {
    int bitmapBuffer[] = new int[w * h];
    int bitmapSource[] = new int[w * h];
    IntBuffer intBuffer = IntBuffer.wrap(bitmapBuffer);
    intBuffer.position(0);
    try {
        gl.glReadPixels(x, y, w, h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE,
                intBuffer);
        int offset1, offset2;
        for (int i = 0; i < h; i++) {
            offset1 = i * w;
            offset2 = (h - i - 1) * w;
            for (int j = 0; j < w; j++) {
                int texturePixel = bitmapBuffer[offset1 + j];
                int blue = (texturePixel >> 16) & 0xff;
                int red = (texturePixel << 16) & 0x00ff0000;
                int pixel = (texturePixel & 0xff00ff00) | red | blue;
                bitmapSource[offset2 + j] = pixel;
            }
        }
    } catch (GLException e) {
        return null;
    }
    return Bitmap.createBitmap(bitmapSource, w, h, Bitmap.Config.ARGB_8888);
}
On larger-resolution devices (e.g. the Samsung Galaxy S4), my app crashes.
I would like to know how to set an inSampleSize for that bitmap.
First of all, I would recommend using
android:largeHeap="true"
in your manifest file.
Second, if that does not help, have a look at https://stackoverflow.com/a/17839597/2956344
I also recommend checking the maximum texture size of your GLSurfaceView so you can limit your bitmap resolution programmatically:
Get Maximum OpenGL ES 2.0 Texture Size Limit in Android
P.S. I think you will find information there about using GL10.
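Note that inSampleSize only takes effect when decoding with BitmapFactory, so it cannot be applied to a bitmap produced by glReadPixels. One alternative, sketched here against the names used in the question (the halving factor is just an example), is to downscale the captured frame before storing it:
// Sketch: capture at full size, then shrink before adding to the list.
Bitmap full = createBitmapFromGLSurface(0, 0, mEffectView.getWidth(),
        mEffectView.getHeight(), gl);
if (full != null) {
    Bitmap scaled = Bitmap.createScaledBitmap(full,
            full.getWidth() / 2, full.getHeight() / 2, true);
    full.recycle();            // release the full-size copy as soon as possible
    al_bitmaps.add(scaled);
}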
I want to blur an image. I used:
public Bitmap mohu(Bitmap bmpOriginal, int hRadius, int vRadius) {
    int width, height, r, g, b, c, a, gry, c1, a1, r1, g1, b1, red, green, blue;
    height = bmpOriginal.getHeight();
    width = bmpOriginal.getWidth();
    int iterations = 5;
    int[] inPixels = new int[width * height];
    int[] outPixels = new int[width * height];
    Bitmap bmpSephia = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
    Canvas canvas = new Canvas(bmpSephia);
    Paint paint = new Paint();
    int i = 0;
    canvas.drawBitmap(bmpOriginal, 0, 0, null);
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            c = bmpOriginal.getPixel(x, y);
            inPixels[i] = c;
            i++;
        }
    }
    for (int k = 0; k < iterations; k++) {
        blur(inPixels, outPixels, width, height, hRadius);
        blur(outPixels, inPixels, height, width, vRadius);
    }
    bmpSephia.setPixels(outPixels, 0, width, 0, 0, width, height);
    return bmpSephia;
}

public static void blur(int[] in, int[] out, int width, int height, int radius) {
    int widthMinus1 = width - 1;
    int tableSize = 2 * radius + 1;
    int divide[] = new int[256 * tableSize];
    for (int i = 0; i < 256 * tableSize; i++)
        divide[i] = i / tableSize;
    int inIndex = 0;
    for (int y = 0; y < height; y++) {
        int outIndex = y;
        int ta = 0, tr = 0, tg = 0, tb = 0;
        for (int i = -radius; i <= radius; i++) {
            int rgb = in[inIndex + ImageMath.clamp(i, 0, width - 1)];
            ta += (rgb >> 24) & 0xff;
            tr += (rgb >> 16) & 0xff;
            tg += (rgb >> 8) & 0xff;
            tb += rgb & 0xff;
        }
        for (int x = 0; x < width; x++) {
            out[outIndex] = (divide[ta] << 24) | (divide[tr] << 16) | (divide[tg] << 8) | divide[tb];
            int i1 = x + radius + 1;
            if (i1 > widthMinus1)
                i1 = widthMinus1;
            int i2 = x - radius;
            if (i2 < 0)
                i2 = 0;
            int rgb1 = in[inIndex + i1];
            int rgb2 = in[inIndex + i2];
            ta += ((rgb1 >> 24) & 0xff) - ((rgb2 >> 24) & 0xff);
            tr += ((rgb1 & 0xff0000) - (rgb2 & 0xff0000)) >> 16;
            tg += ((rgb1 & 0xff00) - (rgb2 & 0xff00)) >> 8;
            tb += (rgb1 & 0xff) - (rgb2 & 0xff);
            outIndex += height;
        }
        inIndex += width;
    }
}

// clamp helper (from ImageMath)
public static float clamp(float x, float a, float b) {
    return (x < a) ? a : (x > b) ? b : x;
}
The method works well for some images, but for others the effect is poor and looks very rough. Can you give me some advice? I have read http://www.jhlabs.com/ip/blurring.html
and http://java.sun.com/products/java-media/jai/forDevelopers/jai1_0_1guide-unc/Image-enhance.doc.html#51172 , but I cannot find a good method for Android.
Your intermediate image is RGB_565. That means 16 bits: 5 for R, 6 for G and 5 for B. If the original image is RGB888 then it will look bad after blurring. Can you not create an intermediate image in the same format as the original?
Also, if the original image is RGB888, how is it converted to 565? Your code has:
c = bmpOriginal.getPixel(x, y);
inPixels[i]=c;
It looks like there is no controlled conversion.
Your blur function is written for an ARGB image. Besides being inefficient, since you have hard-coded the intermediate bitmap to 565, the conversion to RGB_565 is going to do strange things with the alpha channel if the original image is ARGB_8888.
If this answer is not enough, it would be helpful to see some of the "bad" images that this code creates.
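As a concrete illustration of the point above, here is a minimal sketch that keeps the working bitmap in ARGB_8888 and fills the pixel array row-major with getPixels instead of per-pixel getPixel; it also reads the final result from inPixels, since after each horizontal-plus-vertical pair of blur passes the data ends up back in that array. bmpBlurred is a stand-in name replacing the RGB_565 bmpSephia; the other variables are the ones from the mohu() method above:
// Sketch: blur while staying in ARGB_8888, using the same blur() helper as above.
Bitmap bmpBlurred = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bmpOriginal.getPixels(inPixels, 0, width, 0, 0, width, height);  // row-major copy of the source
for (int k = 0; k < iterations; k++) {
    blur(inPixels, outPixels, width, height, hRadius);   // horizontal pass (writes transposed)
    blur(outPixels, inPixels, height, width, vRadius);   // vertical pass (transposes back)
}
bmpBlurred.setPixels(inPixels, 0, width, 0, 0, width, height);   // result ends up in inPixels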