iOS and Android: algorithm or library for feathering image edges, similar to Photoshop's

I am looking for an iOS and Android library (preferably) or an algorithm that would help me feather the edges of an image, similar to the way it is handled in Photoshop. The illustration below shows the desired effect of the algorithm. I am not interested in feathering the bounds of the image, just the alpha edges. I have been searching for an algorithm that can accomplish this for a few days, without luck. Any help will be appreciated.

Assuming that you have an alpha channel (as in a photo with a transparent background), it seems that a regular convolution blur matrix should satisfy you.
However, instead of going through the RGB channels, you should go through the ALPHA channel only.
Check the blur filter here:
https://en.wikipedia.org/wiki/Kernel_%28image_processing%29
You are interested in box blur/Gaussian blur. However, to make the effect smoother, you should use a matrix of a bigger size.
The reason this algorithm will satisfy your needs is that if all surrounding pixels have alpha 0, the result is still 0, and if they are all 255, it stays 255. Only pixels in the border area between alpha 0 and 255 will be affected.
Edit:
Please check this fiddle with Chrome (in Firefox it's really slow):
http://jsfiddle.net/5L40ms65/
You can take a look at the algorithm at the end of the code. While implementing it, I noted that:
- there is no need to blur if all neighbour pixels are 255 or 0 (alpha channel)
- otherwise, the RGB channels have to be blurred as well
In general:
RADIUS = 2 (makes total width of matrix = 5)
for x = 0..width
  for y = 0..height
    if all pixels in the square of radius 2 have alpha = 0
      do nothing
    elsif all pixels in the square have alpha = 255
      do nothing
    else
      pixel[x][y].RGB = average RGB of adjacent pixels where alpha != 0
      pixel[x][y].ALPHA = average ALPHA in the square
Example result with radius = 2:
Of course this is rather a proof-of-concept program; there is a lot of room for memoization and tuning in this script, but it should make the big picture clear.
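For an Android flavour of the same idea, here is a minimal Kotlin sketch, assuming an ARGB_8888 Bitmap input. It box-blurs the alpha channel only and leaves RGB untouched, which is the simplest variant of the pseudocode above (the extra RGB-averaging step near edges is omitted for brevity):

import android.graphics.Bitmap
import android.graphics.Color

// Feather alpha edges with a box blur over the ALPHA channel only.
// Pixels deep inside the shape (alpha 255) or far outside it (alpha 0)
// come out unchanged; only the border area is affected.
fun featherAlpha(src: Bitmap, radius: Int = 2): Bitmap {
    val w = src.width
    val h = src.height
    val inPx = IntArray(w * h)
    val outPx = IntArray(w * h)
    src.getPixels(inPx, 0, w, 0, 0, w, h)
    for (y in 0 until h) {
        for (x in 0 until w) {
            var sum = 0
            var count = 0
            for (dy in -radius..radius) {
                for (dx in -radius..radius) {
                    val nx = x + dx
                    val ny = y + dy
                    if (nx in 0 until w && ny in 0 until h) {
                        sum += Color.alpha(inPx[ny * w + nx])
                        count++
                    }
                }
            }
            val p = inPx[y * w + x]
            outPx[y * w + x] = Color.argb(sum / count, Color.red(p), Color.green(p), Color.blue(p))
        }
    }
    return Bitmap.createBitmap(outPx, w, h, Bitmap.Config.ARGB_8888)
}

A per-pixel loop like this is O(w * h * radius^2); for large radii a separable blur would be the practical choice.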

You could change the alpha of your border based on the background color of the view.
var r: CGFloat = 0
var g: CGFloat = 0
var b: CGFloat = 0
var a: CGFloat = 0
if let bg = self.view.backgroundColor, bg.getRed(&r, green: &g, blue: &b, alpha: &a) {
    let imgv = UIImageView(frame: CGRect(x: 100, y: 100, width: 100, height: 100))
    imgv.image = UIImage(named: "name")
    imgv.layer.borderWidth = 2.0
    imgv.layer.borderColor = UIColor(red: r, green: g, blue: b, alpha: 0.5).cgColor
    imgv.layer.cornerRadius = imgv.frame.size.width / 2
    imgv.clipsToBounds = true
} else {
    // Could not get the RGB components of the background color
}
You may see errors in this code because I have not yet tested it.

Related

How to get to edge pixel of an image

I'm trying to get the background color of an image that I download with Glide.
I think (please correct me if I'm wrong) that the best way is to get an edge pixel (top-left-most, bottom-right-most, etc.; my images are usually a centered image on a solid background color).
I'm getting a specific pixel color in my RecyclerView like this (based on the accepted answer here: How to Get Pixel Color in Android):
val bitmap = (binding.image.drawable as BitmapDrawable).bitmap
val pixel = bitmap.getPixel(x,y)
My question is how can I get the edge pixel of the image so I can determine the color of the background?
If you have the bitmap, you can use getWidth() and getHeight() (or just bitmap.width and bitmap.height in Kotlin) to get the X and Y coordinates of the far edges (coordinates start at 0, so the far edge is at height - 1, etc.).
Then you can plug those into getColor:
with(bitmap) {
    // origin (0,0) is the top left corner
    val topLeft = getColor(0, 0)
    val bottomRight = getColor(width - 1, height - 1)
    // ...
}
(I used with to avoid going bitmap.whatever over and over)
This might not be the best way to get the actual background colour; it really depends (what if the image has a thin border?), and this could be a tricky problem! Just in case it helps, Android has the Palette library, which lets you generate a bunch of colour swatches from a Bitmap, so it might be useful if you want to pull out the main colours from an image and pick one.
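As a rough illustration of that Palette route (assuming the androidx.palette dependency is on the classpath and a `bitmap` in scope), a sketch like this reads the dominant swatch and falls back to a corner pixel:

import androidx.palette.graphics.Palette

// Generate swatches asynchronously, then read the dominant colour,
// falling back to the top-left pixel when no swatch could be built.
Palette.from(bitmap).generate { palette ->
    val background = palette?.dominantSwatch?.rgb ?: bitmap.getPixel(0, 0)
    // use `background` as the presumed background colour
}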

OpenCV: a different approach to detecting a Go board

I am working on an Android app that will recognize a Go board and create an SGF file of it.
I made a version that is able to detect a board and warp the perspective to make it square (code and example image below). Unfortunately it gets a bit harder when stones are added (image below).
Important things about an average Go board:
round black and white stones
black lines on the board
board color ranges from white to light brown, sometimes with a wood grain
stones are placed on intersections of two lines
Correct me if I am wrong, but I think my current approach is not a good one.
Does somebody have a general idea of how I can separate the stones and lines from the rest of the picture?
My code:
Mat input = inputFrame.rgba(); // original image
Mat gray = new Mat();          // grayscale image
// convert image to grayscale
Imgproc.cvtColor(input, gray, Imgproc.COLOR_RGB2GRAY);
// try to improve the histogram (more contrast)
Imgproc.equalizeHist(gray, gray);
// blur image
Size s = new Size(5, 5);
Imgproc.GaussianBlur(gray, gray, s, 0);
// apply adaptive threshold
Imgproc.adaptiveThreshold(gray, gray, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, 11, 2);
// add a secondary threshold, removes a lot of noise
Imgproc.threshold(gray, gray, 0, 255, Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU);
Some images:
(source: eightytwo.axc.nl)
(source: eightytwo.axc.nl)
EDIT: 05-03-2016
Yay! I managed to detect lines, stones and color correctly. Precondition: the picture has to contain only the board itself, without any other background visible.
I use HoughLinesP (60 lines) and HoughCircles (17 circles); duration on my phone (1st gen Moto G) is about 5 seconds.
Detecting the board and warping it turns out to be quite a challenge when it has to work under different angles and lighting conditions... still working on that.
Suggestions for different approaches are still welcome!!
(source: eightytwo.axc.nl)
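For anyone reproducing that Hough step, a sketch along these lines should find the stones as circles (Kotlin with the OpenCV 3.x Java bindings; every parameter value here is my guess, not the asker's, and needs tuning per board and camera):

import org.opencv.core.Mat
import org.opencv.imgproc.Imgproc

// Rough sketch of the HoughCircles step: stones come out as circles
// in the grayscale board image. Each column of `circles` holds (x, y, radius).
fun findStones(gray: Mat): Mat {
    val circles = Mat()
    Imgproc.HoughCircles(
        gray, circles, Imgproc.HOUGH_GRADIENT,
        1.0,   // dp: accumulator resolution
        20.0,  // minDist between stone centres (~one grid cell)
        100.0, // Canny high threshold
        30.0,  // accumulator threshold: lower finds more circles
        10, 25 // min/max stone radius in pixels
    )
    return circles
}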
EDIT: 15-03-2016
I found a nice way to get line intersections with cross-type morphological transformations. It works amazingly well when the picture is taken directly above the board, but unfortunately not at an angle (see below).
(source: eightytwo.axc.nl)
In my last update I showed line and stone detection with a picture taken from directly above. Since then I have been working on detecting the board and warping it in a way that makes my line and stone detection useful.
Harris corner detection
I struggled to get the right parameter settings, and I am still not sure if they are optimal; I can't find much information on how to optimize an image before using Harris corners. Right now it detects too many corners to be useful, though it feels like it could work (upper row of pictures in the example).
Mat corners = new Mat();
Imgproc.cornerHarris(image, corners, 5, 3, 0.03);
// keep only strong responses: between 1% of the maximum and the maximum itself
Mat mask = new Mat(corners.size(), CvType.CV_8U, new Scalar(1));
Core.MinMaxLocResult maxVal = Core.minMaxLoc(corners);
Core.inRange(corners, new Scalar(maxVal.maxVal * 0.01), new Scalar(maxVal.maxVal), mask);
cross-type morphological transformations
Works great when the picture is taken directly from above; used at an angle or with a rotated board it does not work (middle row of pictures in the example).
Imgproc.GaussianBlur(image, image, new Size(5, 5), 0);
Imgproc.adaptiveThreshold(image, image, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY_INV, 11, 2);
int morph_elem = 1;     // 0: Rect - 1: Cross - 2: Ellipse
int morph_size = 5;
int morph_operator = 0; // 0: Opening - 1: Closing - 2: Gradient - 3: Top Hat - 4: Black Hat
Mat element = Imgproc.getStructuringElement(morph_elem, new Size(2 * morph_size + 1, 2 * morph_size + 1), new Point(morph_size, morph_size));
// morph_operator + 2 maps the operator index onto the Imgproc.MORPH_* constants
Imgproc.morphologyEx(image, image, morph_operator + 2, element);
contours and Hough lines
If there are no stones on the outer board line and the lighting conditions are not too harsh, it works pretty well; quite often, though, the contours cover only part of the board (lower row of pictures in the example).
Imgproc.GaussianBlur(image, image, new Size(5, 5), 0);
Imgproc.adaptiveThreshold(image, image, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY_INV, 11, 2);
Mat hierarchy = new Mat();
MatOfPoint biggest = null;
int contourId = 0;
double biggestArea = 0;
double minSize = 2000;
List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(image, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
// find the biggest contour
for (int x = 0; x < contours.size(); x++) {
    double area = Imgproc.contourArea(contours.get(x));
    if (area > minSize && area > biggestArea) {
        biggestArea = area;
        biggest = contours.get(x);
        contourId = x;
    }
}
Given the right picture, all three methods work, but not well enough to be reliable. Any thoughts on parameters, image pre-processing, different approaches or anything else that might improve the detection are welcome =)
link to picture
EDIT: 31-03-2016
Detecting lines and stones is pretty much solved, so I will close this question. I created a new one for detecting and warping accurately.
Anybody interested in my progress: this is my GOSU Snap Alpha channel. Don't expect too much of it right now!
EDIT: 16-10-2016
Update: I saw that some people are still following this question.
I tested some more stuff and started using TensorFlow; my neural network looks promising, and you can have a look at it here.
A lot of work still has to be done: my current image dataset is awful, and right now I am working on getting a big dataset.
The app works best using a square board with thick lines and decent lighting.
Assuming that you don't want to force your end user to take the cleanest possible pictures (for example by using an overlay, as some QR code scanners do), you could use some morphological transformations with different kernels:
Opening and closing with a rectangular kernel for the lines
Opening and closing with an ellipse kernel to get the stones (it should be possible to invert the image at some point to get back the white or the black ones); see the sketch after this answer
Take a look at http://docs.opencv.org/2.4/doc/tutorials/imgproc/opening_closing_hats/opening_closing_hats.html (sorry, this one is in C++, but I think it is almost the same in Java).
I tried these operations to remove the grid from a Sudoku image to avoid noise in cell extraction, and it worked like a charm.
Let me know if this information was useful for you (this is for sure a very interesting case).
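A minimal Kotlin/OpenCV sketch of the ellipse-kernel part of that suggestion; the kernel size of 15 is an assumption that depends on the stone size in pixels, and the input is assumed to be an already thresholded binary Mat:

import org.opencv.core.Mat
import org.opencv.core.Size
import org.opencv.imgproc.Imgproc

// Opening with an ellipse kernel keeps round blobs (stones) and
// suppresses thin structures such as the grid lines.
fun extractStones(binary: Mat): Mat {
    val stones = Mat()
    val kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, Size(15.0, 15.0))
    Imgproc.morphologyEx(binary, stones, Imgproc.MORPH_OPEN, kernel)
    return stones
}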
I'm working on the same program. I avoid finding lines at all.
First use a perspective transform to get the board into a square, as you have done. Find the edges of the 19x19 grid. Then, assuming the board is 19x19, you can just compute the positions of the lines. This works well for me. Then you compute the intersection closest to the center of a stone to determine which row and column lines the stone is on. That also works pretty well for me. The only problem is calibrating the program for different lighting conditions and different colors of stones and boards.
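That snap-to-nearest-intersection step is simple enough to sketch. Assuming the board has already been warped to a square of `size` pixels with the grid running edge to edge (both assumptions, not details from the answer), it comes down to rounding:

// Map a detected stone centre (cx, cy) to a (column, row) on the grid.
// `size` is the side length of the warped, square board image in pixels.
fun nearestIntersection(cx: Double, cy: Double, size: Double, lines: Int = 19): Pair<Int, Int> {
    val step = size / (lines - 1) // spacing between grid lines
    val col = Math.round(cx / step).toInt().coerceIn(0, lines - 1)
    val row = Math.round(cy / step).toInt().coerceIn(0, lines - 1)
    return Pair(col, row)
}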

OpenCV Android Green Color Detection

Currently I'm making an app where the user will detect green colors. I use this photo for testing:
My problem is that I cannot detect any green pixel. Before this I worked with blue color, and everything worked fine. Now I can't detect anything, though I have tried different combinations of RGB. I wanted to know whether it's a problem with green or with my detection range, so I made an image in Paint using (0, 255, 0), and it worked. Why can't it see this circle, then? I use this code for detection:
Core.inRange(hsv_image, new Scalar([I change this value]), new Scalar(60, 255, 255), ultimate_blue);
It could be that I set the wrong range, but I used Photoshop to get the color of one of the green pixels and converted its RGB value into HSV. Yet it doesn't work; it doesn't even detect the pixel I sampled. What's wrong? Thanks in advance.
Using Miki's answer:
Green in HSV space has H = 120, and H is in the range [0, 360].
OpenCV halves the H values to fit the range [0, 255], so H, instead of being in the range [0, 360], is in the range [0, 180].
S and V are still in the range [0, 255].
As a consequence, the value of H for green is 60 = 120 / 2.
Your upper and lower bounds should be:
// sensitivity is a int, typically set to 15 - 20
[60 - sensitivity, 100, 100]
[60 + sensitivity, 255, 255]
UPDATE
Since your image is quite dark, you need to use a lower bound for V. With these values:
sensitivity = 15;
[60 - sensitivity, 100, 50] // lower bound
[60 + sensitivity, 255, 255] // upper bound
the resulting mask would be like:
You can refer to this answer for the details.
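Putting those numbers together, a Kotlin/OpenCV sketch of the whole mask step might look like this (the BGR input and the sensitivity of 15 are assumptions taken from the answer above):

import org.opencv.core.Core
import org.opencv.core.Mat
import org.opencv.core.Scalar
import org.opencv.imgproc.Imgproc

// Green mask using OpenCV's halved hue range (green is H = 60, not 120).
// The lowered V bound of 50 follows the update above for dark images.
fun greenMask(bgr: Mat): Mat {
    val hsv = Mat()
    Imgproc.cvtColor(bgr, hsv, Imgproc.COLOR_BGR2HSV)
    val mask = Mat()
    val sensitivity = 15.0
    Core.inRange(
        hsv,
        Scalar(60.0 - sensitivity, 100.0, 50.0),  // lower bound
        Scalar(60.0 + sensitivity, 255.0, 255.0), // upper bound
        mask
    )
    return mask
}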

How to blur some portion of Image in Android?

I am working on a project where I have to show some portion of the image clearly and blur the rest of it. The blur should be controlled by a slider, meaning it can be increased or decreased. The final image should look like the one below.
During my research for this I found the links below useful:
http://blog.neteril.org/blog/2013/08/12/blurring-images-on-android/
https://github.com/kikoso/android-stackblur
But the issue with the above links is that they all blur the complete image, not some part of it.
Kindly suggest a solution to achieve this. Thanks in advance.
Do a masked blur a few times...
create a mask
0 means blur (black) and >= 1 means do not blur (white). Initialize the white part with a big enough width, for example w = 100 pixels.
create a masked blur function
just a common convolution with some matrix like
0.0 0.1 0.0
0.1 0.6 0.1
0.0 0.1 0.0
but do it only for target pixels where the mask is == 0. After the image is blurred, blur the mask as well; this will enlarge the white area a bit (by a pixel per iteration, but losing magnitude on the borders, which is why w > 1).
loop bullet #2 N times
N determines the depth of the blur/non-blur gradient; the w is only there to ensure that the blurred mask will grow. Each pass, the blur mask will increase its white part.
That should do the trick. You can also use dilation of the mask instead of blurring it.
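Before the author's C++ implementation below, here is one pass of that loop as a Kotlin sketch over plain ARGB pixel arrays; the int-array representation and the integer approximation of the kernel are my assumptions:

// One pass of the masked blur described above: pixels where mask == 0 get
// the 3x3 kernel (0.6 centre, 0.1 per 4-neighbour); everything else is
// copied through. Run this N times, and blur (or dilate) the mask between
// passes, to produce the blur gradient.
fun maskedBlurPass(src: IntArray, mask: IntArray, w: Int, h: Int): IntArray {
    fun ch(p: Int, shift: Int) = (p shr shift) and 0xFF
    val out = src.copyOf()
    for (y in 1 until h - 1) {
        for (x in 1 until w - 1) {
            if (mask[y * w + x] != 0) continue // >= 1 means "keep sharp"
            val i = y * w + x
            fun mix(s: Int) = (6 * ch(src[i], s) + ch(src[i - w], s) +
                    ch(src[i + w], s) + ch(src[i - 1], s) + ch(src[i + 1], s)) / 10
            out[i] = (0xFF shl 24) or (mix(16) shl 16) or (mix(8) shl 8) or mix(0)
        }
    }
    return out
}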
[edit1] implementation
I have played with this a bit today and found out that the mask does not grow enough with smoothing alone, so I changed the algorithm a bit (here is my C++ code):
picture pic0, pic1, pic2;
// pic0 - source
// pic1 - output
// pic2 - mask
int x0 = 400, y0 = 330, r0 = 100, dr = 200;
// x0,y0,r0 - masked area
// dr - blur gradient size
int i, r;
// init output as source image
pic1 = pic0;
// init mask (size of source image) with gradient circles
pic2.resize(pic0.xs, pic0.ys);
pic2.clear(0);
for (i = 1; i <= 255; i++)
{
    r = r0 + dr - ((dr * i) >> 8);
    pic2.bmp->Canvas->Brush->Color = TColor(i << 16); // shifted because GDI has an inverse channel layout to direct pixel access
    pic2.bmp->Canvas->Pen->Color = TColor(i << 16);
    pic2.bmp->Canvas->Ellipse(x0 - r, y0 - r, x0 + r, y0 + r);
}
for (i = 1; i < 255; i += 10) pic1.rgb_smooth_masked(pic2, i);
Here is the smooth function:
//---------------------------------------------------------------------------
void picture::rgb_smooth_masked(const picture &mask, DWORD threshold)
{
    int i, x, y;
    color *q0, *q1, *m0, c0, c1, c2;
    if ((xs < 2) || (ys < 2)) return;
    for (y = 0; y < ys - 1; y++)
    {
        q0 = p[y];     m0 = mask.p[y];
        q1 = p[y + 1];
        for (x = 0; x < xs - 1; x++)
            if (m0[x].dd < threshold)
            {
                // 2x2 convolution: 0.50*c0 + 0.25*c1 + 0.25*c2
                c0 = q0[x];
                c1 = q0[x + 1];
                c2 = q1[x];
                for (i = 0; i < 4; i++)
                    q0[x].db[i] = DWORD((DWORD(c0.db[i]) + DWORD(c0.db[i]) + DWORD(c1.db[i]) + DWORD(c2.db[i])) >> 2);
            }
    }
}
//---------------------------------------------------------------------------
create a gradient mask with circles increasing in color from 1 to 255
The rest is black; the gradient width is dr and determines the smoothing sharpness.
create the masked smooth with mask and threshold
It smooths all pixels where the mask pixel is < threshold; see the function rgb_smooth_masked. It uses a 2x2 convolution matrix:
0.50, 0.25
0.25, 0.00
loop the threshold from 1 to 255 by some step
The step determines the image blur strength.
And finally here are some visual results. This is the source image, taken with my camera:
And here is the output on the left and the mask on the right:
The blue color means values < 256 (B is the lowest 8 bits of the color).
I use my own picture class for images, so some members are:
xs, ys - size of the image in pixels
p[y][x].dd - pixel at (x, y) position as a 32-bit integer type
clear(color) - clears the entire image
resize(xs, ys) - resizes the image to a new resolution

RGB value to HSL converter

The Google Maps API v3 allows "styles" to be applied to the map, including setting the color of various features. However, the color format it uses is HSL (or what seems like it):
hue (an RGB hex string)
lightness (a floating point value between -100 and 100)
saturation (a floating point value between -100 and 100)
(from the docs)
I managed to find RGB-to-HSL converters online, but I am unsure how to specify the converted values in a way that Google Maps will accept. For instance, a typical HSL value given by a converter would be: 209° 72% 49%.
How does that HSL value map to the parameters I specified from the Google Maps API? I.e., how does a hue degree value map to an RGB hex string, and how does a percentage map to a floating-point value between -100 and 100?
I am still uncertain how to do the conversion. Given an RGB value, I need to quickly convert it to what Google Maps expects so that the color will be identical...
Since the hue argument expects RGB, you can use the original color as the hue.
rgb2hsl.py:
#!/usr/bin/env python
def rgb2hsl(r, g, b):
    # Hue: the RGB string
    H = (r << 16) + (g << 8) + b
    H = "0x%06X" % H
    # convert to [0 - 1] range
    r = float(r) / 0xFF
    g = float(g) / 0xFF
    b = float(b) / 0xFF
    # http://en.wikipedia.org/wiki/HSL_and_HSV#Lightness
    M = max(r, g, b)
    m = min(r, g, b)
    C = M - m
    # Lightness
    L = (M + m) / 2
    # Saturation (HSL); C == 0 guards pure black, grey and white
    if C == 0:
        S = 0
    elif L <= .5:
        S = C / (2 * L)
    else:
        S = C / (2 - 2 * L)
    # gmaps wants values from -100 to 100
    S = int(round(S * 200 - 100))
    L = int(round(L * 200 - 100))
    return (H, S, L)

def main(r, g, b):
    r = int(r, base=16)
    g = int(g, base=16)
    b = int(b, base=16)
    print(rgb2hsl(r, g, b))

if __name__ == '__main__':
    from sys import argv
    main(*argv[1:])
Example:
$ ./rgb2hsl.py F0 FF FF
('0xF0FFFF', 100, 94)
Result:
Below is a screenshot showing the body set to an RGB background color (#2800E2 in this case), and a Google map with styled road geometry, using the values calculated as above ('0x2800E2', 100, -11).
It's pretty clear that Google uses your styling to create around six different colors centered on the given color, with the outlines being closest to the input. I believe this is as close as it gets.
From experimentation with http://gmaps-samples-v3.googlecode.com/svn/trunk/styledmaps/wizard/index.html: for water, gmaps subtracts a gamma of .5. To get the exact color you want, use the calculations above and add that .5 gamma back, like:
{
featureType: "water",
elementType: "geometry",
stylers: [
{ hue: "#2800e2" },
{ saturation: 100 },
{ lightness: -11 },
{ gamma: 0.5 },
]
}
We coded a tool which does exactly what you want. It takes hexadecimal RGB values and generates the needed HSL code. It comes with a preview and Google Maps JavaScript API V3 code output. Enjoy ;D
http://googlemapscolorizr.stadtwerk.org/
From the linked page:
Note: while hue takes an HTML hex color value, it only uses this value to determine the basic color (its orientation around the color wheel), not its saturation or lightness, which are indicated separately as percentage changes. For example, the hue for pure green may be defined as "#00ff00" or "#000100" within the hue property and both hues will be identical. (Both values point to pure green in the HSL color model.) RGB hue values which consist of equal parts Red, Green and Blue — such as "#000000" (black) and "#FFFFFF" (white) and all the pure shades of grey — do not indicate a hue whatsoever, as none of those values indicate an orientation in the HSL coordinate space. To indicate black, white or grey, you must remove all saturation (set the value to -100) and adjust lightness instead.
At least as I read it, that means you need to convert your angle based on a color wheel. For example, let's assume 0 degrees is pure red, 120 degrees is pure blue and 240 degrees is pure green. You'd then take your angle, figure out which two primaries it falls between, and interpolate to determine how much of each primary to use. In theory you should probably use a quadratic interpolation -- but chances are that you can get by reasonably well with linear.
Using that, 90 degrees (for example) is 90/120 = 3/4ths of the way from red to blue, so your hex number for the hue would be 0x00010003 -- or any other number that had green set to 0, and a 1:3 ratio between red and blue.
I needed to match colors exactly, so I used the tool that @stadt.werk offers (http://googlemapscolorizr.stadtwerk.org/) to get close.
But then I ran into the problem explained by @bukzor, where the Google Maps API creates variations on your shade, none of which seem to be exactly what I specified.
So I pulled up the map in a browser, took a screenshot of just the area with the two shades that weren't quite matching, opened it in an image editor (pixlr.com, in my case), used the color-picker tool to get the saturation and lightness of the shade, adjusted the saturation and/or lightness in my Google API call by 1, and repeated until I got something that seems to match perfectly.
It is possible, of course, that the Google Maps API will do different things with the colors on different devices/browsers/etc., but so far, so good.
Tedious, yes, but it works.
