How to Get the CORRECT Pixel Colour - android

I would like to read out the colour of individual pixels from an image. I've read a lot of topics here about this, but so far without success.
After getting a lot of unwanted values, I tried to simplify both my code and the image. I tried the following:
//..
Android.Graphics.BitmapFactory.Options op = new BitmapFactory.Options();
op.InPreferredConfig = Bitmap.Config.Argb8888;
Bitmap b = Android.Graphics.BitmapFactory.DecodeResource(this.Resources, Resource.Drawable.Image, op);
//imageView.SetImageBitmap(b);
int pixel1 = b.GetPixel(3, 4);
int pixel2 = b.GetPixel(12, 11);
int pixel3 = b.GetPixel(19, 20);
int pixel4 = b.GetPixel(27, 28);
int pixel5 = b.GetPixel(27, 19);
int pixel6 = b.GetPixel(20, 11);
int redV1 = Android.Graphics.Color.GetRedComponent(pixel1);
int greenV1 = Android.Graphics.Color.GetGreenComponent(pixel1);
int blueV1 = Android.Graphics.Color.GetBlueComponent(pixel1);
int redV2 = Android.Graphics.Color.GetRedComponent(pixel2);
int greenV2 = Android.Graphics.Color.GetGreenComponent(pixel2);
int blueV2 = Android.Graphics.Color.GetBlueComponent(pixel2);
//..
Surprisingly, I got the following results:
pixel1's color is OK.
pixel2's color is NOT OK; it is pixel1's color.
pixel3's color is NOT OK; it is pixel2's color.
pixel4's color is NOT OK; it is pixel2's color.
pixel5's color is NOT OK; it is pixel2's color.
pixel6's color is NOT OK; it is a color that does exist in my image, but at different coordinates (ones not referenced in my code).
Displaying the .png image in an ImageView looks fine.
Could anyone help me find my mistake?
Thanks!
The pixel1..6 values seem to be consistent (different for different colours and equal for the same), and that would be enough for me, but all the values became -1 once I started working with my real .png file (a 1500x1500 image resource). Is that too large to work with? What is the maximum size in this case? Thanks!
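One detail worth knowing here: GetPixel returns the colour as a single packed 32-bit ARGB integer, and -1 is simply 0xFFFFFFFF viewed as a signed int, i.e. fully opaque white. A quick sketch of the packing, in Python for illustration (the function name is mine):

```python
def unpack_argb(pixel):
    """Split Android's signed 32-bit packed ARGB pixel value into components."""
    value = pixel & 0xFFFFFFFF          # view the signed int as unsigned
    alpha = (value >> 24) & 0xFF
    red = (value >> 16) & 0xFF
    green = (value >> 8) & 0xFF
    blue = value & 0xFF
    return alpha, red, green, blue

print(unpack_argb(-1))  # (255, 255, 255, 255): -1 is opaque white
```

So a wall of -1 values does not necessarily mean the decode failed; it can mean the sampled coordinates all land on white pixels, e.g. because the decoded bitmap is larger than expected.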

... Ok, perhaps I found the fault. To be precise, not the fault, but a usable workaround! I'm now trying with a smaller .png image, 500x500, which is small enough to work with.
BUT: it seems BitmapFactory creates a larger image (1000x1000). This is because the decoder scales drawable resources by screen density; putting the image in drawable-nodpi, or setting op.InScaled = false, should give you the bitmap at its original size. Otherwise I have to divide each coordinate by 2 to look up the correct colour on my original image. AND there is another problem: some colours are read slightly differently by GetPixel(). There can be small differences in the R,G,B values, e.g. RGB(148,0,198) instead of (153,0,204). That means a 'bit' of extra work for me to identify all the new 'Android colours' in my bitmap :( I hope this helps others with the same topic. It wasn't easy to find one bug/problem after another.
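The divide-by-2 above can be generalised instead of hard-coded: compute the scale factor from the decoded bitmap's size relative to the original, and map every coordinate through it. A small sketch, in Python for illustration (the function name is mine):

```python
def to_decoded_coords(x, y, original_size, decoded_size):
    """Map a coordinate on the original image onto the density-scaled decoded bitmap."""
    scale_x = decoded_size[0] / original_size[0]
    scale_y = decoded_size[1] / original_size[1]
    return int(x * scale_x), int(y * scale_y)

# a 500x500 resource decoded at 2x density becomes 1000x1000
print(to_decoded_coords(153, 204, (500, 500), (1000, 1000)))  # (306, 408)
```

With this mapping the GetPixel coordinates stay correct on any device density, not just on one where the factor happens to be 2.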

Related

Get the drawable's name after setting it on ImageView

I am setting an image on an ImageView dynamically from the drawable folder. I have working code to set the image dynamically, but I can't get the name of the image/drawable that is currently set on the ImageView.
Either I need the drawable's name, or some way to get an instance of each drawable together with a name I can set on the ImageView.
Any help is much appreciated!
I searched and tried all the answers on Stack Overflow.
The goal: set the image dynamically on the ImageView and also get that image's name.
As a good practice, you can store the drawable resource id in a variable and update the ImageView from it:
@DrawableRes int myDrawable;
iv.setImageDrawable(getResources().getDrawable(myDrawable));
Setting a tag and reading it back is not ideal imho; keeping the resource id in a variable is more maintainable and debuggable.
Ok, after 2 days of research and trials, I finally managed to do what I planned. Here's my code, in case it helps anybody.
ArrayList<Integer> array_fruits = new ArrayList<Integer>();
int fruit_apricot = R.drawable.fruits_apricot;
int fruit_apple = R.drawable.fruits_apple;
int fruit_banana = R.drawable.fruits_banana;
// ... and more
array_fruits.add(fruit_apple);
array_fruits.add(fruit_apricot);
array_fruits.add(fruit_banana);
// ... and more
// Pick a random index 0..29 (valid ArrayList positions for 30 entries).
// nextInt(30) already returns 0..29, so do not add 1 to it.
Random rand = new Random();
int num = rand.nextInt(30);
itemImg.setImageDrawable(getResources().getDrawable(array_fruits.get(num)));
Log.e("num: ", num + "");
Toast.makeText(getApplicationContext(), num + "", Toast.LENGTH_LONG).show();
I had tried almost all the solutions given on Stack Overflow and other sites.
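For the original question of recovering the drawable's name, the same idea can be extended by keeping a name-to-id mapping alongside the list, so the random pick yields both the id and the name at once. A language-neutral sketch in Python (the names and resource ids here are made up for illustration):

```python
import random

# hypothetical resource ids keyed by drawable name
fruits = {
    "fruits_apricot": 0x7f020001,
    "fruits_apple": 0x7f020002,
    "fruits_banana": 0x7f020003,
}

names = sorted(fruits)              # stable order, like the ArrayList
num = random.randrange(len(names))  # valid indices 0..len-1, no +1 needed
name = names[num]
res_id = fruits[name]               # the id you would hand to setImageDrawable
print(name, hex(res_id))
```

In Java the equivalent would be a HashMap<String, Integer> (or two parallel lists) so that whatever index is drawn, the name is known without querying the ImageView afterwards.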

Improving threshold result for Tesseract

I am kind of stuck with this problem, and I know there are many questions about it on Stack Overflow, but in my case nothing gives the expected result.
The Context:
I am using Android OpenCV along with Tesseract to read the MRZ area of a passport. When the camera starts, I pass the input frame to an AsyncTask; the frame is processed and the MRZ area is extracted successfully. I then pass the extracted MRZ area to a function prepareForOCR(inputImage) that takes the MRZ area as a gray Mat and outputs a bitmap with the thresholded image, which I pass to Tesseract.
The problem:
The problem is with thresholding the image. I use adaptive thresholding with blockSize = 13 and C = 15, but the result is not always the same; it depends on the lighting and the general conditions under which the frame was taken.
What I have tried:
First I resize the image to a fixed size (871x108) so the input image is always the same and does not depend on which phone is used.
After resizing, I try different blockSize and C values:
// toOcr contains the extracted MRZ area
Bitmap toOCRBitmap = Bitmap.createBitmap(bitmap);
Mat inputFrame = new Mat();
Mat toOcr = new Mat();
Utils.bitmapToMat(toOCRBitmap, inputFrame);
Imgproc.cvtColor(inputFrame, inputFrame, Imgproc.COLOR_BGR2GRAY);
TesseractResult lastResult = null;
for (int B = 11; B < 70; B++) {
    for (int C = 11; C < 70; C++) {
        if (IsPrime(B) && IsPrime(C)) {
            Imgproc.adaptiveThreshold(inputFrame, toOcr, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, B, C);
            Bitmap toOcrBitmap = OpenCVHelper.getBitmap(toOcr);
            TesseractResult result = TesseractInstance.extractFrame(toOcrBitmap, "ocrba");
            if (result.getMeanConfidence() > 70) {
                if (MrzParser.tryParse(result.getText())) {
                    Log.d("Main2Activity", "Best result with " + B + " : " + C);
                    return result;
                }
            }
        }
    }
}
Using the code above, the thresholded result is a black-on-white image that gives a confidence greater than 70. I can't post the whole image for privacy reasons, but here's a cropped one and one from a dummy passport.
Using the MrzParser.tryParse function, which adds checks for each character's position and validity within the MRZ, I am able to correct some occurrences, like a name containing an 8 instead of a B, and get a good result. But it takes far too long, which is to be expected: I am thresholding about 225 images in the loop, plus a Tesseract call for each.
I already tried collecting the C and B values that occur most often, but the results differ from image to image.
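As a side note, the size of that search grid is easy to pin down: the loop tries every prime B and C between 11 and 69, which is 15 values each, so at most 15 x 15 = 225 threshold-plus-OCR passes. A quick check in Python:

```python
def is_prime(n):
    """Trial-division primality test, sufficient for this small range."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

primes = [p for p in range(11, 70) if is_prime(p)]
print(len(primes), len(primes) ** 2)  # 15 primes -> 225 (B, C) combinations
```

That worst case is what makes the exhaustive search so slow; any heuristic that prunes the grid (e.g. stopping at the first parseable result, as the loop already does) only helps on average.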
The question:
Is there a way to choose C and blockSize values that always give the same result, perhaps by adding more OpenCV calls on the input image (increasing contrast and so on)? I have searched the web for 2 weeks now and can't find a viable solution; this is the only approach giving accurate results.
You can use a clustering algorithm to cluster the pixels based on color. The characters are dark and there is a good contrast in the MRZ region, so a clustering method will most probably give you a good segmentation if you apply it to the MRZ region.
Here I demonstrate it with MRZ regions obtained from sample images that can be found on the internet.
I use color images, apply some smoothing, convert to Lab color space, then cluster the a, b channel data using kmeans (k=2). The code is in python but you can easily adapt it to java. Due to the randomized nature of the kmeans algorithm, the segmented characters will have label 0 or 1. You can easily sort it out by inspecting cluster centers. The cluster-center corresponding to characters should have a dark value in the color space you are using.
I just used the Lab color space here. You can use RGB, HSV or even GRAY and see which one is better for you.
After segmenting like this, I think you can even find good values for B and C of your adaptive-threshold using the properties of the stroke width of the characters (if you think the adaptive-threshold gives a better quality output).
import cv2
import numpy as np

im = cv2.imread('mrz1.png')
# smooth, then convert to Lab
lab = cv2.cvtColor(cv2.GaussianBlur(im, (3, 3), 1), cv2.COLOR_BGR2Lab)
# cluster on the a and b channels only (note: lab, not im)
im32f = np.array(lab[:, :, 1:3], dtype=np.float32)
k = 2  # 2 clusters
term_crit = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1)
ret, labels, centers = cv2.kmeans(im32f.reshape([im.shape[0]*im.shape[1], -1]),
                                  k, None, term_crit, 10, cv2.KMEANS_RANDOM_CENTERS)
# segmented image: labels are 0/1, scale to 0/255 for display
labels = labels.reshape([im.shape[0], im.shape[1]]) * 255
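Because kmeans assigns the labels 0/1 arbitrarily from run to run, the "inspect the cluster centers" step can be automated: compute each cluster's mean lightness and take the darker cluster as the characters. A stdlib-only sketch of that selection step (the data here is a toy example, not a real MRZ):

```python
def character_cluster(labels, lightness):
    """Return the label whose pixels have the lower mean lightness (the dark characters)."""
    sums = {}
    for label, value in zip(labels, lightness):
        total, count = sums.get(label, (0, 0))
        sums[label] = (total + value, count + 1)
    return min(sums, key=lambda k: sums[k][0] / sums[k][1])

# toy example: cluster-1 pixels are dark, cluster-0 pixels are bright
labels = [0, 0, 1, 1, 0, 1]
lightness = [220, 235, 40, 55, 228, 35]
print(character_cluster(labels, lightness))  # 1
```

In the kmeans code above, the flattened labels and the corresponding Lab L-channel values would play the role of the two lists here.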
Some results:

Why I'm Getting wrong values with Colour detection (OpenCV)

I have a problem with conversion from BGR to HSV.
I'm programming with Android Studio and testing with my Xperia Z5.
In my code snippet, I'm getting totally wrong colour values:
Scalar LOWER_RED = new Scalar(0, 0, 0);
Scalar HIGHER_RED = new Scalar(30, 255, 255);
Mat src = new Mat(bitmap.getHeight(), bitmap.getWidth(), CvType.CV_8UC4);
Mat hsv = new Mat(bitmap.getHeight(), bitmap.getWidth(), CvType.CV_8UC4);
Utils.bitmapToMat(bitmap, src);
Imgproc.cvtColor(src, hsv, Imgproc.COLOR_BGR2HSV);
Core.inRange(hsv, LOWER_RED, HIGHER_RED, hsv);
Utils.matToBitmap(hsv, bitmap);
I want to capture red colour. What did I do wrong?
Edit:
I tried all the advice, and my code snippet now looks like this:
Scalar LOWER_RED = new Scalar(0, 10, 100);
Scalar HIGHER_RED = new Scalar(10, 255, 255);
Mat src = new Mat(bitmap.getHeight(), bitmap.getWidth(), CvType.CV_8UC3);
Mat hsv = new Mat(bitmap.getHeight(), bitmap.getWidth(), CvType.CV_8UC3);
Utils.bitmapToMat(bitmap, src);
Imgproc.cvtColor(src, hsv, Imgproc.COLOR_BGR2HSV);
Core.inRange(hsv, LOWER_RED, HIGHER_RED, hsv);
Utils.matToBitmap(hsv, bitmap);
The outcome is a black screen (no matches), even with:
Core.inRange(hsv, new Scalar(0, 0, 0), new Scalar(10, 255, 255), HighRedRange);
Core.inRange(hsv, new Scalar(160, 100, 100), new Scalar(179, 255, 255), LowRedRange);
Core.addWeighted(LowRedRange, 1.0, HighRedRange, 1.0, 0.0, hsv);
i.e. the ranges 0,0,0 - 10,255,255 and 160,100,100 - 179,255,255: the vegetables come out black and the white background stays white in hsv.
If I use a Scalar range from 110,100,100 to 135,255,255, the red pepper is white and the background black (correctly detected).
Source Picture:
And I don't understand any of this...
There is a good tutorial here. It's for C++, but the general idea is the same; I tried it and it certainly works. The problem is that your range is too broad: in OpenCV, the Hue channel runs from 0 to 180, so your upper limit of 30 corresponds to 30*2 = 60 degrees, which includes nearly the whole yellow range too.
I set the Hue range from 0 to 10, but remember that you may also want the 160-179 range, which contains the other part of red. For that you just need a second mask, then combine the two with a simple addition.
The example code in Python:
import cv2
import numpy as np
img = cv2.imread('peppers.jpg',1)
im_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
thresh_low = np.array([0,100,100])
thresh_high = np.array([10,255,255])
mask = cv2.inRange(im_hsv, thresh_low, thresh_high)
im_masked = cv2.bitwise_and(img,img, mask= mask)
cv2.imshow('Masked',im_masked)
cv2.waitKey(0)
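The key point is that red wraps around hue 0 on OpenCV's 0-179 scale, which is why two masks are needed. A tiny pure-Python predicate makes that two-band logic explicit (the saturation/value floors of 100 follow the thresholds used above):

```python
def is_red(h, s, v):
    """Red in OpenCV HSV wraps around hue 0: accept both ends of the 0-179 scale."""
    in_low_band = 0 <= h <= 10      # first red band
    in_high_band = 160 <= h <= 179  # second red band, on the other side of the wrap
    return (in_low_band or in_high_band) and s >= 100 and v >= 100

print(is_red(5, 200, 200))   # True: strong red
print(is_red(30, 200, 200))  # False: hue 30 (~60 degrees) is already yellow
```

cv2.inRange applies exactly this per-pixel interval test, vectorised; adding the two masks is the `or` between the bands.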
Original image:
Result:
I've found my problem now. It's this line:
Imgproc.cvtColor(src, hsv, Imgproc.COLOR_BGR2HSV);
With COLOR_RGB2HSV all values are correct.
I thought BGR was used on Android smartphones? (In fact, Utils.bitmapToMat produces an RGBA Mat, so the RGB-based conversions are the right ones.)
Anyway, big thanks for all the answers.
I wish you all a great day :)

getDesiredMinimumHeight/Width() in WallpaperManager

I looked at these 2 functions in the documentation here. I want to get the desired wallpaper dimensions.
Running those functions on an SGS3 (1280x720) with the stock launcher, I got 1280x1280 for both minDesiredWidth and minDesiredHeight; same thing with a Note 3 (1920x1080), where I got 1920x1920.
I want to know the wallpaper ratio the device wants, and I thought I would get it from those 2 functions.
Both devices' stock launchers have a static background image at their respective screen resolutions, so why doesn't getDesiredMinimumWidth give me 1280/1080 for each device respectively?
How do I know the proper ratio for the device?
This is the intended result of those methods; the code used in the WallpaperManager class is:
return sGlobals.mService.getHeightHint();
and
return sGlobals.mService.getWidthHint();
It isn't documented anywhere why they return the same value, but to get the true width and height you should use:
Point displaySize = new Point();
getWindowManager().getDefaultDisplay().getRealSize(displaySize);
and read the values with int width = displaySize.x and int height = displaySize.y

compare 2 images to avoid duplication

I am comparing 2 similar images and would like to see whether they match. Currently I use this code:
public void foo(Bitmap bitmapFoo) {
    int[] pixels;
    int height = bitmapFoo.getHeight();
    int width = bitmapFoo.getWidth();
    pixels = new int[height * width];
    bitmapFoo.getPixels(pixels, 0, width, 1, 1, width - 1, height - 1);
}
and I call the function as foo(img1), where:
img1 = (Bitmap) data.getExtras().get("data");
I would like to know how to get at the pixels read above; I tried assigning the call to a variable, but that did not work. Should it have a return type, and what format is the result in?
Also, how do I compare the 2 images?
Note that the two images may have different dimensions, depending on the mobile camera the snapshot was taken with.
And can it recognize the same scene shot in the morning and at night?
Thanks in advance.
This code compares each pixel of the base image with the other image.
If both pixels at a location (x, y) are the same, the unchanged pixel is added to the result image; otherwise the pixel is modified before being added.
If the base image's height or width is larger than the other image's, the extra pixels that have no counterpart in the other image are coloured red.
Both image files should have the same format for the comparison.
The code uses the base image's file format to create the resultant image, which contains highlighted portions wherever a difference was observed.
Here is a link to the code, with a sample example attached.
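Without the highlighting step, the pixel-by-pixel idea above reduces to counting mismatches between two equal-length pixel arrays (so the images must first be brought to the same dimensions, e.g. by scaling one of them). A minimal sketch in Python; the function name is mine:

```python
def percent_different(pixels_a, pixels_b):
    """Percentage of positions where two equal-length pixel arrays differ."""
    if len(pixels_a) != len(pixels_b):
        raise ValueError("compare equal-sized images (scale one of them first)")
    mismatches = sum(1 for a, b in zip(pixels_a, pixels_b) if a != b)
    return 100.0 * mismatches / len(pixels_a)

print(percent_different([1, 2, 3, 4], [1, 2, 9, 4]))  # 25.0
```

Note that an exact pixel comparison like this will not treat a morning shot and a night shot of the same scene as similar; that needs a perceptual measure (histograms, feature matching, or a perceptual hash) rather than raw pixel equality.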
If you want to copy the pixels of the bitmap into an array, the easiest way is:
int height = bitmapFoo.getHeight();
int width = bitmapFoo.getWidth();
IntBuffer pixels = IntBuffer.allocate(height * width);
bitmapFoo.copyPixelsToBuffer(pixels);
(Note that copyPixelsToBuffer takes a java.nio Buffer such as IntBuffer, not a plain int[] array.) See the documentation.
I should warn you to handle this with care; otherwise you may get an OutOfMemoryError.
To get all pixels:
bitmapFoo.copyPixelsToBuffer(pixels); // pixels must be a java.nio Buffer
or
bitmapFoo.getPixels(pixels, 0, width, 0, 0, width, height);
To get one pixel:
The two arguments must be integers within the ranges [0, getWidth()-1] and [0, getHeight()-1]:
int pix = bitmapFoo.getPixel(x, y);
