OpenCV Android Green Color Detection

I'm currently making an app in which the user will detect green colors. I use this photo for testing:
My problem is that I cannot detect any green pixels. Earlier I worked with blue color and everything worked fine. Now I can't detect anything, although I've tried different combinations of RGB. I wanted to know whether the problem is with green itself or with my detection range, so I made an image in Paint using (0, 255, 0), and that worked. Why can't it see this circle then? I use this code for detection:
Core.inRange(hsv_image, new Scalar([I change this value]), new Scalar(60, 255, 255), ultimate_blue);
It could be that I set the wrong range, but I used Photoshop to get the color of one of the green pixels and converted its RGB value into HSV. Yet it doesn't work; it doesn't even detect the pixel I sampled. What's wrong? Thanks in advance.
Using Miki's answer:

Green in HSV space has H = 120, with H in the range [0, 360].
OpenCV halves the H values so they fit into a byte, so instead of being in the range [0, 360], H is in the range [0, 180].
S and V are still in the range [0, 255].
As a consequence, the value of H for green is 60 = 120 / 2.
Your upper and lower bounds should be:
// sensitivity is an int, typically set to 15 - 20
[60 - sensitivity, 100, 100]
[60 + sensitivity, 255, 255]
UPDATE
Since your image is quite dark, you need to use a lower bound for V. With these values:
sensitivity = 15;
[60 - sensitivity, 100, 50] // lower bound
[60 + sensitivity, 255, 255] // upper bound
the resulting mask would be like:
You can refer to this answer for the details.
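As a rough sketch of how those bounds plug into the Android Java API (assuming the usual org.opencv.core / org.opencv.imgproc imports; rgbaFrame, hsvImage and greenMask are placeholder names, not from the question):
int sensitivity = 15;
Mat hsvImage = new Mat();
Mat greenMask = new Mat();
// inRange compares raw channel values, so convert the RGBA camera frame to HSV first
Imgproc.cvtColor(rgbaFrame, hsvImage, Imgproc.COLOR_RGB2HSV);
// green is centred on H = 60 in OpenCV's [0, 180] hue scale;
// the V lower bound of 50 keeps the darker greens, as in the update above
Core.inRange(hsvImage,
        new Scalar(60 - sensitivity, 100, 50),
        new Scalar(60 + sensitivity, 255, 255),
        greenMask);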

Related

OpenCV InRange parameter

I'm using OpenCV on Android to find circles of a specific colour in real time. My first step is to keep only the pixels that correspond to the defined color I'm looking for (red or green in this example). Example image.
For this purpose I'm using the method inRange().
Here is my question: what kind of color model (RGB, BGR, HSV, ...) is required for the lower/upper bound color parameters? And what is a good practice for defining these color bounds with respect to natural brightness changes?
matRgba = inputFrame.rgba();
Scalar lowerColorBound = new Scalar(0.0, 0.0, 0.0); // Blue, Green, Red?
Scalar upperColorBound = new Scalar(0.0, 0.0, 0.0);
// convert to HSV, necessary to use inRange()
Imgproc.cvtColor(matRgba, matRgba, Imgproc.COLOR_RGB2HSV);
// keep only the pixels defined by lower and upper bound range
Core.inRange(matRgba, lowerColorBound, upperColorBound, matRgba);
The required color model for the inRange(src, lowerb, upperb, dst) function in OpenCV is HSV.
The lowerb and upperb parameters specify the required lower and upper color bounds in the HSV format. In OpenCV, for HSV, Hue range is [0,179], Saturation range is [0,255] and Value range is [0,255].
For object tracking applications a possible practice (as suggested in the official documentation) to define these two color bounds can be:
1. Start from the color to track, in RGB format.
2. Convert the color to the HSV format. Let (H, S, V) be its value.
3. Assign the value (H - deltaH, minS, minV) to lowerb and the value (H + deltaH, maxS, maxV) to upperb.
Possible starting values for the parameters defined in step 3 can be:
deltaH = 10
minS = 100, minV = 100
maxS = 255, maxV = 255
Then you can adjust them to narrow down or enlarge the H, S, V intervals as needed.
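A minimal Java sketch of those steps (the 1x1 Mat is just a convenient way to run cvtColor on a single RGB color; targetRed, targetGreen and targetBlue are hypothetical placeholders for the color you want to track):
// 1. the color to track, in RGB
Mat rgbColor = new Mat(1, 1, CvType.CV_8UC3, new Scalar(targetRed, targetGreen, targetBlue));
// 2. convert it to HSV and read back its hue
Mat hsvColor = new Mat();
Imgproc.cvtColor(rgbColor, hsvColor, Imgproc.COLOR_RGB2HSV);
double hue = hsvColor.get(0, 0)[0];   // H in [0, 179]
// 3. build the bounds around that hue
double deltaH = 10, minS = 100, minV = 100, maxS = 255, maxV = 255;
Scalar lowerColorBound = new Scalar(hue - deltaH, minS, minV);
Scalar upperColorBound = new Scalar(hue + deltaH, maxS, maxV);
// then, per frame: convert to HSV and mask
Imgproc.cvtColor(matRgba, matRgba, Imgproc.COLOR_RGB2HSV);
Core.inRange(matRgba, lowerColorBound, upperColorBound, matRgba);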

Defining grey values in Android

I am trying to define specific gray values in an image. In my application, I create empty bitmaps and generate different patterns on them afterwards (e.g. sinusoidal 2D patterns with gray values from 0 - 255 or 0 - 1).
In my previous research I could only find this line of code, which was supposed to solve my problem:
myBitmap.setPixel(x, y, Color.rgb(45, 127, 0));
But this only tells me how to work with colors, not with gray values.
Does anyone have an idea?
I don't know if there is a way to give an Android bitmap a backing store where each pixel is represented by one byte, but you certainly can set a pixel to gray by setting red, green, and blue values to the same:
int gray = 127; // 0-255
myBitmap.setPixel(x, y, Color.rgb(gray, gray, gray));
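For the sinusoidal 2D patterns mentioned in the question, a small untested sketch along the same lines (the 64-pixel period and 256x256 size are arbitrary example values):
int width = 256, height = 256;
double period = 64.0; // arbitrary spatial period in pixels
Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
for (int x = 0; x < width; x++) {
    // map sin(...) from [-1, 1] to a gray level in [0, 255]
    int gray = (int) Math.round(127.5 * (1.0 + Math.sin(2.0 * Math.PI * x / period)));
    for (int y = 0; y < height; y++) {
        bmp.setPixel(x, y, Color.rgb(gray, gray, gray));
    }
}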

Why isn't inRange function detecting blue color when I have given it the entire possible Hue range for the blue color?

On the website colorizer.org, they have an HSV range of H=0-360, S=0-100, V=0-100. We are also aware that the HSV range in OpenCV is H=0-180, S=0-255, V=0-255.
I wanted to select a range for any shade of (what we perceive as) blue color, so I looked at colorizer.org, and saw that blue Hue ranges roughly from 170 to 270. So I scaled this Hue range to OpenCV by dividing by 2, which gives 85-135.
Now, I took the following screenshot of color [H=216, S=96, V=67] from the preview at the website
Then I ran the app on my phone and captured the following camera frame from the laptop screen. I understand that the HSV channel values will differ somewhat from those on the website because of other conditions, such as additional light (V in HSV) in the room when I captured the camera frame, etc.
Then I converted this Mat to HSV color space by Imgproc.cvtColor(rgbaFrame, hsvImage, Imgproc.COLOR_RGB2HSV_FULL);, which resulted in the following image.
Then I called the inRange function:
Core.inRange(hsvImage, new Scalar(85, 50, 40), new Scalar(135, 255, 255), maskedImage);
which resulted in the following maskedImage.
The question is: why isn't it detecting the blue color when I have included the entire possible hue range for blue?
IMPORTANT: Except for the first original image, all the images were stored to the SD card using the Highgui.imwrite function so that I could move them to my computer and upload them to Stack Overflow. You may have noticed that the blue color in the first original screenshot appears red in the second image. The reason is that the frame captured by the camera (that is, the photo/frame of the first screenshot captured by the mobile phone camera) is an RGBA image, but OpenCV assumes BGR channel order by default when it saves images to the SD card or elsewhere. So be assured that the original image is RGBA; it is only interpreted as BGR by OpenCV when saving to the SD card. That's why the blue appears red.
Using this code works for me (C++):
cv::Mat input = cv::imread("../inputData/HSV_RGB.jpg");
//assuming your image to be in RGB format after loading:
cv::Mat hsv;
cv::cvtColor(input,hsv,CV_RGB2HSV);
// hue range:
cv::Mat mask;
inRange(hsv, cv::Scalar(85, 50, 40), cv::Scalar(135, 255, 255), mask);
cv::imshow("blue mask", mask);
I used this input image (saved and loaded in BGR format, although it is in fact an RGB image; that's why we have to use RGB2HSV instead of BGR2HSV):
resulting in this mask:
The difference from your code is that I used CV_RGB2HSV instead of CV_RGB2HSV_FULL. The flag CV_RGB2HSV_FULL uses the whole byte to store the hue values, so the 0..360 degree range is scaled to 0..255 instead of to 0..180 as with CV_RGB2HSV.
I could verify this by using this part of the code:
// use _FULL flag:
cv::cvtColor(input,hsv,CV_RGB2HSV_FULL);
// but scale the hue values accordingly:
double hueScale = 2.0/1.41176470588; // 1.41176... = 360/255, so hueScale = (2*255)/360: maps H from the halved [0,180] scale to the _FULL [0,255] scale
cv::Mat mask;
// scale hue values:
inRange(hsv, cv::Scalar(hueScale*85, 50, 40), cv::Scalar(hueScale*135, 255, 255), mask);
giving this result:
For anyone who wants to test with the "right" image:
Here's the input converted to BGR: If you want to use that directly you have to switch conversion from RGB2HSV to BGR2HSV. But I thought it would be better to show the BGR version of the input, too...
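Translated back to the Android/Java code from the question, the same fix would presumably look like this (keeping the 85-135 bounds, which assume hue in [0, 180]):
// use the halved-hue conversion, not COLOR_RGB2HSV_FULL
Imgproc.cvtColor(rgbaFrame, hsvImage, Imgproc.COLOR_RGB2HSV);
Core.inRange(hsvImage, new Scalar(85, 50, 40), new Scalar(135, 255, 255), maskedImage);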

iOS and Android Algorithm or library for feathering edges of the images similar to photoshop's

I am looking for an iOS and Android library (preferably) or an algorithm that would help me feather the edges of an image, similar to how it is handled in Photoshop. The illustration below shows the desired effect of the algorithm. I am not interested in feathering the bounds of the image, just the alpha edges. I have been searching for an algorithm that can accomplish this for a few days without luck. Any help will be appreciated.
Assuming that you have an alpha channel (as in a photo with a transparent background), a regular convolution blur matrix should satisfy you.
However, instead of going through the RGB channels, you should go through the ALPHA channel only.
Check the blur filter here:
https://en.wikipedia.org/wiki/Kernel_%28image_processing%29
You are interested in a box blur/Gaussian blur. However, to make the effect smoother, you should use a matrix of bigger size.
The reason this algorithm will satisfy your needs is that if all surrounding pixels have alpha 0, the pixel stays 0, and if all have alpha 255, it stays 255. Only pixels in the border area between alpha 0 and 255 will be affected.
Edit:
Please check this fiddle with chrome (in ff that's really slow):
http://jsfiddle.net/5L40ms65/
You can take a look at the algorithm at the end of the code. While implementing it I noted that:
- there is no need to blur if all neighbour pixels are 255 or 0 (alpha channel)
- otherwise the RGB channels need to be blurred as well
In general:
RADIUS = 2 (makes total width of matrix = 5)
for x = 0..width
  for y = 0..height
    if all pixels in square of radius 2 have alpha = 0
      do nothing
    elsif all pixels in square have alpha = 255
      do nothing
    else
      pixel[x][y].RGB = average RGB of adjacent pixels where alpha != 0
      pixel[x][y].ALPHA = average ALPHA in square
Example result with radius=2
Of course this is rather a concept program; there is a lot of room for memoization and tuning in this script, but it should make the big picture clear.
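A rough, untested Java/Android sketch of the same concept (featherAlpha is a hypothetical helper; it only averages the alpha channel and leaves out the RGB averaging from the pseudocode for brevity):
static Bitmap featherAlpha(Bitmap src, int radius) {
    int w = src.getWidth(), h = src.getHeight();
    Bitmap out = src.copy(Bitmap.Config.ARGB_8888, true);
    for (int x = 0; x < w; x++) {
        for (int y = 0; y < h; y++) {
            int sum = 0, count = 0, min = 255, max = 0;
            // box window of side 2 * radius + 1, clamped at the image border
            for (int dx = -radius; dx <= radius; dx++) {
                for (int dy = -radius; dy <= radius; dy++) {
                    int nx = Math.min(Math.max(x + dx, 0), w - 1);
                    int ny = Math.min(Math.max(y + dy, 0), h - 1);
                    int a = Color.alpha(src.getPixel(nx, ny));
                    sum += a;
                    count++;
                    min = Math.min(min, a);
                    max = Math.max(max, a);
                }
            }
            // leave fully transparent and fully opaque neighbourhoods untouched
            if (max != 0 && min != 255) {
                int c = src.getPixel(x, y);
                out.setPixel(x, y, Color.argb(sum / count,
                        Color.red(c), Color.green(c), Color.blue(c)));
            }
        }
    }
    return out;
}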
You could change the alpha of your border based on the background color of the view.
var r: CGFloat = 0
var g: CGFloat = 0
var b: CGFloat = 0
var a: CGFloat = 0
if self.view.backgroundColor?.getRed(&r, green: &g, blue: &b, alpha: &a) == true {
    let imgv = UIImageView(frame: CGRect(x: 100, y: 100, width: 100, height: 100))
    imgv.image = UIImage(named: "name")
    imgv.layer.borderWidth = 2.0
    imgv.layer.borderColor = UIColor(red: r, green: g, blue: b, alpha: 0.5).cgColor
    imgv.layer.cornerRadius = imgv.frame.size.width / 2
    imgv.clipsToBounds = true
} else {
    // Could not get the RGB components of the background color
}
You may see errors in this code because I have not yet tested it.

RGB value to HSL converter

The Google Maps API v3 allows "styles" to be applied to the map, including setting the color of various features. However, the color format it uses is HSL (or something that looks like it):
hue (an RGB hex string)
lightness (a floating point value between -100 and 100)
saturation (a floating point value between -100 and 100)
(from the docs)
I managed to find RGB to HSL converters online, but I am unsure how to specify the converted values in a way that google maps will accept. For instance, a typical HSL value given by a converter would be: 209° 72% 49%
How does that HSL value map to the parameters I specified from the google maps api? i.e. how does a hue degree value map to an RGB hex string and how does a percentage map to a floating point value between -100 and 100?
I am still uncertain how to do the conversion. I need to, given an RGB value, quickly convert it to what google maps expects so that the color will be identical...
Since the hue argument expects RGB, you can use the original color as the hue.
rgb2hsl.py:
#!/usr/bin/env python
def rgb2hsl(r, g, b):
    #Hue: the RGB string
    H = (r<<16) + (g<<8) + b
    H = "0x%06X" % H

    #convert to [0 - 1] range
    r = float(r) / 0xFF
    g = float(g) / 0xFF
    b = float(b) / 0xFF

    #http://en.wikipedia.org/wiki/HSL_and_HSV#Lightness
    M = max(r, g, b)
    m = min(r, g, b)
    C = M - m

    #Lightness
    L = (M + m) / 2

    #Saturation (HSL)
    if L == 0:
        S = 0
    elif L <= .5:
        S = C / (2 * L)
    else:
        S = C / (2 - 2 * L)

    #gmaps wants values from -100 to 100
    S = int(round(S * 200 - 100))
    L = int(round(L * 200 - 100))

    return (H, S, L)

def main(r, g, b):
    r = int(r, base=16)
    g = int(g, base=16)
    b = int(b, base=16)
    print rgb2hsl(r, g, b)

if __name__ == '__main__':
    from sys import argv
    main(*argv[1:])
Example:
$ ./rgb2hsl.py F0 FF FF
('0xF0FFFF', 100, 94)
Result:
Below is a screenshot showing the body set to a rgb background color (#2800E2 in this case), and a google map with styled road-geometry, using the values calculated as above ('0x2800E2', 100, -11).
It's pretty clear that google uses your styling to create around six different colors centered on the given color, with the outlines being closest to the input. I believe this is as close as it gets.
From experimentation with: http://gmaps-samples-v3.googlecode.com/svn/trunk/styledmaps/wizard/index.html
For water, gmaps subtracts a gamma of .5. To get the exact color you want, use the calculations above, and add that .5 gamma back.
like:
{
  featureType: "water",
  elementType: "geometry",
  stylers: [
    { hue: "#2800e2" },
    { saturation: 100 },
    { lightness: -11 },
    { gamma: 0.5 },
  ]
}
We coded a tool which does exactly what you want. It takes hexadecimal RGB values and generates the needed HSL code. It comes with a preview and Google Maps JavaScript API V3 code output. Enjoy ;D
http://googlemapscolorizr.stadtwerk.org/
From the linked page:
Note: while hue takes an HTML hex color value, it only uses this value to determine the basic color (its orientation around the color wheel), not its saturation or lightness, which are indicated separately as percentage changes. For example, the hue for pure green may be defined as "#00ff00" or "#000100" within the hue property and both hues will be identical. (Both values point to pure green in the HSL color model.) RGB hue values which consist of equal parts Red, Green and Blue — such as "#000000" (black) and "#FFFFFF" (white) and all the pure shades of grey — do not indicate a hue whatsoever, as none of those values indicate an orientation in the HSL coordinate space. To indicate black, white or grey, you must remove all saturation (set the value to -100) and adjust lightness instead.
At least as I read it, that means you need to convert your angle based on a color wheel. For example, let's assume 0 degrees is pure red, 120 degrees is pure blue and 240 degrees is pure green. You'd then take your angle, figure out which two primaries it falls between, and interpolate to determine how much of each primary to use. In theory you should probably use a quadratic interpolation -- but chances are that you can get by reasonably well with linear.
Using that, 90 degrees (for example) is 90/120 = 3/4ths of the way from red to blue, so your hex number for the hue would be 0x00010003 -- or any other number that had green set to 0, and a 1:3 ratio between red and blue.
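For illustration, here is a small Java sketch of that linear interpolation, using the example wheel above (0 degrees = red, 120 = blue, 240 = green); the wheel layout and the helper name are assumptions made for the example, not something the Maps API documents:
// hueDegreesToHex is a hypothetical helper; linear interpolation only
static String hueDegreesToHex(double degrees) {
    double d = ((degrees % 360) + 360) % 360;
    int r = 0, g = 0, b = 0;
    if (d < 120) {              // between red (0) and blue (120)
        b = (int) Math.round(255 * d / 120);
        r = 255 - b;
    } else if (d < 240) {       // between blue (120) and green (240)
        g = (int) Math.round(255 * (d - 120) / 120);
        b = 255 - g;
    } else {                    // between green (240) and red (360)
        r = (int) Math.round(255 * (d - 240) / 120);
        g = 255 - r;
    }
    return String.format("#%02X%02X%02X", r, g, b);
}
// hueDegreesToHex(90) returns "#4000BF": green is 0 and red:blue is roughly 1:3,
// matching the 90-degree example above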
I needed to match colors exactly, so I used the tool that @stadt.werk offers (http://googlemapscolorizr.stadtwerk.org/) to get close.
But then I ran into the problem explained by @bukzor, where the Google Maps API creates variations on your shade, none of which seem to be exactly what I specified.
So I pulled up the map in a browser, took a screenshot of just the area with the two shades that weren't quite matching, opened it in an image editor (pixlr.com, in my case), used the color-picker tool to get the saturation and lightness of the shade, adjusted the saturation and/or lightness in my Google API call by 1, and repeated until I got something that seemed to match perfectly.
It is possible, of course, that Google Maps API will do different things with the colors on different devices/browsers/etc., but so far, so good.
Tedious, yes, but it works.
