I have code to load an image from the SD card and display it in an ImageView.
Mat mRgba = Highgui.imread(dir);
Bitmap bmp = Bitmap.createBitmap(mRgba.cols(), mRgba.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(mRgba, bmp);
mImage.setImageBitmap(bmp, true, null, 5.0f);
The image is loaded, but the colors are wrong; they look swapped rather than truly inverted.
Here is an image comparison.
I tried to load the image with
Bitmap bmp = BitmapFactory.decodeFile(dir);
and it worked correctly, but I have to use Highgui.imread.
What is wrong with my code?
You will have to use something like this:
Mat inputImage = Highgui.imread(pathToFile); // imread loads the image in BGR channel order
Mat tmp = new Mat();
Imgproc.cvtColor(inputImage, tmp, Imgproc.COLOR_BGR2RGB); // convert to the RGB order the Bitmap expects
Bitmap imageToShow = Bitmap.createBitmap(tmp.cols(), tmp.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(tmp, imageToShow);
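To display the converted image, a minimal usage sketch, assuming mImage is a plain android.widget.ImageView (the intermediate Mats can be released once the Bitmap exists):
mImage.setImageBitmap(imageToShow);
inputImage.release(); // free the native OpenCV buffers
tmp.release();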
You're trying to load a bitmap supposing that the image is 8-bit/color RGBA: are you sure of that?
Also note that ARGB is not RGBA; you may need to re-arrange the bytes of each pixel. Something like:
int pixel = get_the_pixel();       // hypothetical accessor
int alpha = (pixel >>> 24) & 0xff; // alpha sits in the top byte of an ARGB int
pixel = (pixel << 8) | alpha;      // shift R, G, B up and append alpha -> RGBA
set_the_pixel(pixel);              // hypothetical setter
You'll want to do something more efficient than the accessor methods shown here, but you get the idea.
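For a whole image, a sketch of the same idea done in one pass: unpack each ARGB int into RGBA byte order, which is the in-memory layout an ARGB_8888 Bitmap expects from copyPixelsFromBuffer (the array and size variables here are illustrative):
int[] argb = new int[width * height];
sourceBitmap.getPixels(argb, 0, width, 0, 0, width, height);
byte[] rgba = new byte[argb.length * 4];
for (int i = 0; i < argb.length; i++) {
    int p = argb[i];
    rgba[i * 4]     = (byte) ((p >> 16) & 0xff); // R
    rgba[i * 4 + 1] = (byte) ((p >> 8) & 0xff);  // G
    rgba[i * 4 + 2] = (byte) (p & 0xff);         // B
    rgba[i * 4 + 3] = (byte) ((p >> 24) & 0xff); // A
}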
I am using the OpenCV Android library's thresholding method for image segmentation, but the output bitmap contains a black background that I do not want. Note that the original image does not have a black background; it is actually white. I am attaching the code for your reference. I am new to OpenCV and don't have much understanding of it, so kindly help me out.
private void Segmentation() {
    Mat srcMat = new Mat();
    gray = new Mat();
    Utils.bitmapToMat(imageBmp, srcMat);
    Imgproc.cvtColor(srcMat, gray, Imgproc.COLOR_RGBA2GRAY);

    grayBmp = Bitmap.createBitmap(imageBmp.getWidth(), imageBmp.getHeight(), Bitmap.Config.RGB_565);
    Utils.matToBitmap(gray, grayBmp);
    grayscaleHistogram();

    Mat threshold = new Mat();
    Imgproc.threshold(gray, threshold, 0, 255, Imgproc.THRESH_BINARY_INV + Imgproc.THRESH_OTSU);
    thresBmp = Bitmap.createBitmap(imageBmp.getWidth(), imageBmp.getHeight(), Bitmap.Config.RGB_565);
    Utils.matToBitmap(threshold, thresBmp);

    Mat closing = new Mat();
    Mat kernel = Mat.ones(5, 5, CvType.CV_8U);
    Imgproc.morphologyEx(threshold, closing, Imgproc.MORPH_CLOSE, kernel, new Point(-1, -1), 3);
    closingBmp = Bitmap.createBitmap(imageBmp.getWidth(), imageBmp.getHeight(), Bitmap.Config.RGB_565);
    Utils.matToBitmap(closing, closingBmp);

    result = new Mat();
    Core.subtract(closing, gray, result);
    Core.subtract(closing, result, result);
    resultBmp = Bitmap.createBitmap(imageBmp.getWidth(), imageBmp.getHeight(), Bitmap.Config.RGB_565);
    Utils.matToBitmap(result, resultBmp);

    Glide.with(ResultActivity.this).asBitmap().load(resultBmp).into(ivAfter);
}
What exactly do you want it to be then? Binary thresholding works like this:
if value < threshold:
value = 0
else:
value = 1
Of course you can convert it to a grayscale / RGB image and adjust the background to your liking. You can also invert your image (white background, black segmentation) by using the ~ operator.
segmented_image = ~ segmented_image
Edit: OpenCV has a dedicated flag to invert the result: THRESH_BINARY_INV. You are already using it, so try changing it to THRESH_BINARY; see the Java sketch below.
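In the OpenCV Java bindings, a minimal sketch of both options applied to the question's code (Core.bitwise_not plays the role of the ~ operator):
Imgproc.threshold(gray, threshold, 0, 255, Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU); // non-inverted Otsu threshold
// or invert an existing binary mask in place:
Core.bitwise_not(threshold, threshold);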
So I make a bitmap from a blob with the following code:
byte[] blob = contact.getMP();
ByteArrayInputStream inputStream = new ByteArrayInputStream(blob);
Bitmap bitmap = BitmapFactory.decodeStream(inputStream);
Bitmap scalen = Bitmap.createScaledBitmap(bitmap, 320, 240, false);
and it gives the following output, which is good.
Then I do the following to make the bitmap into a Mat, but then my colors just change...
//Mat ImageMat = new Mat();
Mat ImageMat = new Mat(320, 240, CvType.CV_32F);
Utils.bitmapToMat(scalen, ImageMat);
I have no idea why, nor do I know another way to turn the bitmap into a Mat. What is wrong?
The color channel order in an Android Bitmap is RGB(A), but in an OpenCV Mat the channels are BGR by default.
So when you do Utils.bitmapToMat() and then treat the result as a BGR image, the [B,G,R] values are read from the [R,G,B] channels: the red and blue channels are interchanged.
One possible solution is to apply cvtColor to the OpenCV Mat you got, as below:
Imgproc.cvtColor(ImageMat, ImageMat, Imgproc.COLOR_BGR2RGB);
It worked for me.
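Applied to the code in the question, a minimal sketch using the question's variable names (Utils.bitmapToMat overwrites the Mat it is given and produces a 4-channel RGBA Mat, so the CV_32F constructor argument has no effect; COLOR_RGBA2BGRA is the 4-channel variant of the swap that keeps the alpha channel):
Mat ImageMat = new Mat();
Utils.bitmapToMat(scalen, ImageMat);                           // 4-channel RGBA Mat
Imgproc.cvtColor(ImageMat, ImageMat, Imgproc.COLOR_RGBA2BGRA); // swap red and blue, keep alpha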
What is a possible solution to banded images in an Android Activity or in OpenGL?
Look at the answer below. Hope it helps.
Color banding solved!
I solved color banding in two phases.
1) When we use BitmapFactory to decode resources, it decodes them in RGB_565, which shows color banding, instead of ARGB_8888. So I used BitmapFactory.Options to set the decode config to ARGB_8888.
The second problem was that whenever I scaled the bitmap, it got banded again.
2) This was the tough part and took a lot of searching before it finally worked.
The method Bitmap.createScaledBitmap also reduced images to the RGB_565 format, so after scaling I got banded images. (The old workaround was to include at least one transparent pixel in a PNG; no other format like JPG or BMP worked.) So I created a method, CreateScaledBitmap, that scales the bitmap while keeping the original bitmap's configuration in the result (I actually copied the method from a post by logicnet.dk and translated it to Java).
BitmapFactory.Options myOptions = new BitmapFactory.Options();
myOptions.inDither = true;
myOptions.inScaled = false;
myOptions.inPreferredConfig = Bitmap.Config.ARGB_8888; // important
//myOptions.inDither = false;
myOptions.inPurgeable = true;
Bitmap tempImage = BitmapFactory.decodeResource(getResources(), R.drawable.defaultart, myOptions); // important
// this is the important part: the new scale method created by someone else
tempImage = CreateScaledBitmap(tempImage, 300, 300, false);
ImageView v = (ImageView) findViewById(R.id.imageView1);
v.setImageBitmap(tempImage);
// the function
public static Bitmap CreateScaledBitmap(Bitmap src, int dstWidth, int dstHeight, boolean filter) {
    Matrix m = new Matrix();
    m.setScale(dstWidth / (float) src.getWidth(), dstHeight / (float) src.getHeight());
    // Create the destination with the source bitmap's own config (e.g. ARGB_8888),
    // so scaling does not fall back to RGB_565.
    Bitmap result = Bitmap.createBitmap(dstWidth, dstHeight, src.getConfig());
    Canvas canvas = new Canvas(result); // was "using (var canvas = new Canvas(result))" in the original C# post
    Paint paint = new Paint();
    paint.setFilterBitmap(filter);
    canvas.drawBitmap(src, m, paint);
    return result;
}
Please correct me if I am wrong.
Also comment if it worked for you.
I am so happy I solved it. Hope it works for all of you.
For OpenGL, you simply bind the bitmap created after applying the functions above.
I am using OpenCV to convert an Android bitmap to grayscale. Below is the code I am using:
IplImage image = IplImage.create( bm.getWidth(), bm.getHeight(), IPL_DEPTH_8U, 4); //creates default image
bm.copyPixelsToBuffer(image.getByteBuffer());
int w=image.width();
int h=image.height();
IplImage grey=cvCreateImage(cvSize(w,h),image.depth(),1);
cvCvtColor(image,grey,CV_RGB2GRAY);
bm is the source image. This code works fine and converts to grayscale; I have tested it by saving the result to the SD card and loading it again. But when I try to load it back into the bitmap using the method below, my app crashes. Any suggestions?
bm.copyPixelsFromBuffer(grey.getByteBuffer());
iv1.setImageBitmap(bm);
iv1 is the ImageView where I want to set bm.
I've never used the OpenCV bindings for Android, but here's some code to get you started. Regard it as pseudocode, because I can't try it out... but you'll get the basic idea. It may not be the fastest solution. I'm pasting from this answer.
public static Bitmap IplImageToBitmap(IplImage src) {
    int width = src.width();   // width/height are methods on IplImage
    int height = src.height();
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    for (int r = 0; r < height; r++) {
        for (int c = 0; c < width; c++) {
            int gray = (int) Math.floor(cvGet2D(src, r, c).getVal(0));
            bitmap.setPixel(c, r, Color.argb(255, gray, gray, gray));
        }
    }
    return bitmap;
}
Your IplImage grey has only one channel, but your Bitmap bm has 4 or 3 channels (ARGB_8888, ARGB_4444, RGB_565). Therefore bm can't store the greyscale image directly; you have to convert it back to RGBA before use.
Example:
(your code)
IplImage image = IplImage.create( bm.getWidth(), bm.getHeight(), IPL_DEPTH_8U, 4);
bm.copyPixelsToBuffer(image.getByteBuffer());
int w=image.width(); int h=image.height();
IplImage grey=cvCreateImage(cvSize(w,h),image.depth(),1);
cvCvtColor(image,grey,CV_RGB2GRAY);
If you want to load it:
(You can reuse your image or create another one (temp))
IplImage temp = cvCreateImage(cvSize(w,h), IPL_DEPTH_8U, 4); // 4 channel
cvCvtColor(grey, temp , CV_GRAY2RGBA); //color conversion
bm.copyPixelsFromBuffer(temp.getByteBuffer()); //now should work
iv1.setImageBitmap(bm);
I hope it will help!
I am reading a raw image from the network. This image has been read by an image sensor, not from a file.
These are the things I know about the image:
~ Height & Width
~ Total size (in bytes)
~ 8-bit grayscale
~ 1 byte/pixel
I'm trying to convert this image to a bitmap to display in an imageview.
Here's what I tried:
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.outHeight = shortHeight; //360
opt.outWidth = shortWidth;//248
imageBitmap = BitmapFactory.decodeByteArray(imageArray, 0, imageSize, opt);
decodeByteArray returns null, since it cannot decode my image.
I also tried reading it directly from the input stream, without converting it to a Byte Array first:
imageBitmap = BitmapFactory.decodeStream(imageInputStream, null, opt);
This returns null as well.
I've searched on this & other forums, but cannot find a way to achieve this.
Any ideas?
EDIT: I should add that the first thing I did was to check whether the stream actually contains the raw image. I did this using other applications (iPhone/Windows MFC), and they are able to read and display the image correctly. I just need to figure out a way to do this in Java/Android.
Android does not support grayscale bitmaps. So first thing, you have to extend every byte to a 32-bit ARGB int. Alpha is 0xff, and R, G and B bytes are copies of the source image's byte pixel value. Then create the bitmap on top of that array.
Also (see comments), it seems that the device thinks that 0 is white, 1 is black - we have to invert the source bits.
So, let's assume that the source image is in the byte array called src. Here's the code:
byte[] src;                             // comes from somewhere...
byte[] bits = new byte[src.length * 4]; // that's where the RGBA array goes
for (int i = 0; i < src.length; i++)
{
    bits[i * 4] =
    bits[i * 4 + 1] =
    bits[i * 4 + 2] = (byte) ~src[i];   // invert the source bits (cast needed: ~ promotes to int)
    bits[i * 4 + 3] = (byte) 0xff;      // the alpha
}
//Now put these nice RGBA pixels into a Bitmap object
Bitmap bm = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bm.copyPixelsFromBuffer(ByteBuffer.wrap(bits));
Once I did something like this to decode the byte stream obtained from the camera preview callback (note that this Bitmap.createBitmap overload takes an int[] of packed ARGB pixels, not a raw byte array):
Bitmap.createBitmap(imagePixels, previewWidth, previewHeight, Bitmap.Config.ARGB_8888);
Give it a try.
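A minimal sketch of how the question's 8-bit grayscale bytes could be packed into such an int[] first (variable names taken from the question; the pixel array name is illustrative):
int[] imagePixels = new int[shortWidth * shortHeight];
for (int i = 0; i < imagePixels.length; i++) {
    int gray = imageArray[i] & 0xff;                    // unsigned 0..255 gray value
    imagePixels[i] = Color.argb(255, gray, gray, gray); // opaque gray ARGB pixel
}
Bitmap bmp = Bitmap.createBitmap(imagePixels, shortWidth, shortHeight, Bitmap.Config.ARGB_8888);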
for (int i = 0; i < src.length; i++)
{
    bits[i * 4] = bits[i * 4 + 1] = bits[i * 4 + 2] = (byte) ~src[i]; // invert the source bits
    bits[i * 4 + 3] = (byte) 0xff; // the alpha
}
The conversion loop above can take a lot of time when converting the 8-bit image to RGBA; a 640x800 image can take more than 500 ms. A quicker solution is to use the ALPHA_8 format for the bitmap and apply a color filter:
// set up a color filter to invert the alpha channel (in my case this was needed)
float[] mx = new float[] {
        1.0f, 0,    0,    0,     0,   // red
        0,    1.0f, 0,    0,     0,   // green
        0,    0,    1.0f, 0,     0,   // blue
        0,    0,    0,    -1.0f, 255  // alpha
};
ColorMatrixColorFilter cf = new ColorMatrixColorFilter(mx);
imageView.setColorFilter(cf);

// Only the alpha channel of the bitmap is set; this is much faster because the
// per-pixel conversion step is skipped entirely.
Bitmap bm = Bitmap.createBitmap(width, height, Bitmap.Config.ALPHA_8);
bm.copyPixelsFromBuffer(ByteBuffer.wrap(src)); // src is not modified; it's just the 8-bit grayscale array
imageView.setImageBitmap(bm);
Use Drawable.createFromStream. Here's how to do it with an HttpResponse, but you can get the InputStream any way you want.
InputStream stream = response.getEntity().getContent();
Drawable drawable = Drawable.createFromStream(stream, "Get Full Image Task");