I am working on a project where I have to keep one portion of the image sharp and blur the rest. The blur should be controlled by a slider, meaning it can be increased or decreased. The final result should look like the image below.
During my research I found the links below useful:
http://blog.neteril.org/blog/2013/08/12/blurring-images-on-android/
https://github.com/kikoso/android-stackblur
But the issue with the links above is that they all blur the complete image, not just part of it.
Kindly suggest some solution to achieve this. Thanks in advance.
Do a masked blur a few times:
create mask
0 means blur (black) and >= 1 means no blur (white). Initialize the non-blur (white) part with a big enough width, for example w = 100 pixels.
create masked blur function
just a common convolution with a matrix like
0.0 0.1 0.0
0.1 0.6 0.1
0.0 0.1 0.0
but apply it only to target pixels where the mask is == 0. After the image is blurred, blur the mask as well. This should enlarge the white area a bit (by a pixel per iteration, but losing magnitude on the borders, which is why w > 1).
loop step #2 N times
N determines the blur/non-blur gradient depth; w is only there to ensure that the blurred mask keeps growing. Each iteration the mask's white part increases.
That should do the trick (a minimal one-pass sketch follows below). You can also use dilation of the mask instead of blurring it.
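If it helps to see the idea in Android terms, here is a minimal Kotlin sketch of a single masked pass (the helper name and the plain box average are just illustrative assumptions, not the code used for the results below). Between passes you would blur or dilate the mask so the sharp region grows, exactly as described in the steps above:

import android.graphics.Bitmap
import android.graphics.Color

// One masked blur pass: blur only pixels whose mask value is black (0),
// leave pixels under the white (>= 1) part of the mask untouched.
fun maskedBlurPass(src: Bitmap, mask: Bitmap): Bitmap {
    val out = src.copy(Bitmap.Config.ARGB_8888, true)
    for (y in 1 until src.height - 1) {
        for (x in 1 until src.width - 1) {
            if (Color.red(mask.getPixel(x, y)) != 0) continue   // white mask -> keep sharp
            var r = 0; var g = 0; var b = 0
            // plain 3x3 box average standing in for the weighted kernel above
            for (dy in -1..1) for (dx in -1..1) {
                val c = src.getPixel(x + dx, y + dy)
                r += Color.red(c); g += Color.green(c); b += Color.blue(c)
            }
            val a = Color.alpha(src.getPixel(x, y))             // keep the original alpha
            out.setPixel(x, y, Color.argb(a, r / 9, g / 9, b / 9))
        }
    }
    return out
}

Calling this N times (and blurring or dilating the mask between calls) gives the blur/non-blur gradient; for the slider in the question, the slider value would map to N.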
[edit1] implementation
I have played with this a bit today and found out that the mask does not grow enough when smoothed, so I changed the algorithm a bit (here is my C++ code):
picture pic0,pic1,pic2;
// pic0 - source
// pic1 - output
// pic2 - mask
int x0=400,y0=330,r0=100,dr=200;
// x0,y0,r0 - masked area
// dr - blur gradient size
int i,r;
// init output as source image
pic1=pic0;
// init mask (size of source image) with gradient circles
pic2.resize(pic0.xs,pic0.ys);
pic2.clear(0);
for (i=1;i<=255;i++)
{
r=r0+dr-((dr*i)>>8);
pic2.bmp->Canvas->Brush->Color=TColor(i<<16); // shifted because GDI has an inverted channel layout compared to direct pixel access
pic2.bmp->Canvas->Pen ->Color=TColor(i<<16);
pic2.bmp->Canvas->Ellipse(x0-r,y0-r,x0+r,y0+r);
}
for (i=1;i<255;i+=10) pic1.rgb_smooth_masked(pic2,i);
Here is the smooth function:
//---------------------------------------------------------------------------
void picture::rgb_smooth_masked(const picture &mask,DWORD threshold)
{
int i,x,y;
color *q0,*q1,*m0,c0,c1,c2;
if ((xs<2)||(ys<2)) return;
for (y=0;y<ys-1;y++)
{
q0=p[y ]; m0=mask.p[y];
q1=p[y+1];
for (x=0;x<xs-1;x++)
if (m0[x].dd<threshold)
{
c0=q0[x];
c1=q0[x+1];
c2=q1[x];
for (i=0;i<4;i++)
q0[x].db[i]=DWORD((DWORD(c0.db[i])+DWORD(c0.db[i])+DWORD(c1.db[i])+DWORD(c2.db[i]))>>2);
}
}
}
//---------------------------------------------------------------------------
Create a gradient mask with circles increasing in color from 1 to 255.
The rest is black; the gradient width is dr and determines the smoothing sharpness.
Create a masked smooth with a mask and threshold.
Smooth all pixels where the mask pixel is < threshold. See the function rgb_smooth_masked; it uses this 2x2 convolution matrix:
0.50,0.25
0.25,0.00
Loop the threshold from 1 to 255 by some step.
The step determines the image blur strength.
And finally, here are some visual results. This is the source image I took with my camera:
And here is the output on the left and the mask on the right:
The blue color means values < 256 (B is the lowest 8 bits of the color).
I use my own picture class for images, so some members are:
xs,ys - size of image in pixels
p[y][x].dd - pixel at (x,y) position as a 32-bit integer type
clear(color) - clears entire image
resize(xs,ys) - resizes image to new resolution
I'm trying to get the background color of an image that I download with Glide.
I think (please correct me if I'm wrong) that the best way is to get an edge pixel (top-left-most, bottom-right-most, etc.; my images are usually a centered image on a solid background color).
I'm getting a specific pixel color in my RecyclerView like this (based on the accepted answer here: How to Get Pixel Color in Android):
val bitmap = (binding.image.drawable as BitmapDrawable).bitmap
val pixel = bitmap.getPixel(x,y)
My question is how can I get the edge pixel of the image so I can determine the color of the background?
If you have the bitmap, you can use getWidth() and getHeight() (or just bitmap.width and bitmap.height in Kotlin) to get the X and Y coordinates of the far edges (it starts at 0 so it will be height - 1 etc).
Then you can plug those into getColor
with(bitmap) {
// origin (0,0) is the top left corner
val topLeft = getColor(0, 0)
val bottomRight = getColor(width-1, height-1)
....
}
(I used with to avoid going bitmap.whatever over and over)
This might not be the best way to get the actual background colour; it really depends (what if the image has a thin border?), and this could be a tricky problem! Just in case it helps, Android has the Palette library, which lets you generate a bunch of colour swatches from a Bitmap. That might be useful if you want to pull out the main colours from an image and pick one.
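Just as an illustration of the Palette idea, here is a minimal Kotlin sketch (it assumes the androidx.palette dependency; the helper name and the top-left-pixel fallback are made up for the example):

import android.graphics.Bitmap
import androidx.palette.graphics.Palette

// Picks a likely background colour from a Bitmap: the dominant swatch if
// Palette finds one, otherwise the top-left pixel as a crude fallback.
fun guessBackgroundColor(bitmap: Bitmap): Int {
    val palette = Palette.from(bitmap).generate()   // do this off the UI thread in real code
    return palette.getDominantColor(bitmap.getPixel(0, 0))
}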
I am looking for an iOS and Android library (preferably) or an algorithm that would help me feather the edges of an image in a similar way to how it is handled in Photoshop. The illustration below shows the desired effect of the algorithm. I am not interested in feathering the bounds of the image, just the alpha edges. I have been searching for an algorithm that can accomplish this for a few days without luck. Any help will be appreciated.
Assuming that you have an alpha channel (like a photo with a transparent background), it seems that a regular convolution blur matrix should satisfy you.
However, instead of going through the RGB channels, you should go through the ALPHA channel only.
Check the blur filter here:
https://en.wikipedia.org/wiki/Kernel_%28image_processing%29
You are interested in a box blur / Gaussian blur. However, to make the effect smoother, you should use a matrix of bigger size.
The reason this algorithm satisfies your needs is that if all surrounding pixels have alpha 0, the result stays 0; if 255, it stays 255. Only pixels in the border area between alpha 0 and 255 will be affected.
Edit:
Please check this fiddle with chrome (in ff that's really slow):
http://jsfiddle.net/5L40ms65/
You can take a look at the algorithm at the end of the code. While implementing it I noted that:
- there is no need to blur if all neighbouring pixels are 255 or 0 (alpha channel)
- otherwise it is required to blur the RGB channels as well
In general:
RADIUS = 2 (makes total width of matrix = 5)
For x = 0..width
for y = 0..height
if all pixels in square of radius 2 are alpha = 0
do nothing
elsif all pixels in square have alpha = 255
do nothing
else
pixel[x][y].RGB = average RGB of adjacent pixels where alpha != 0
pixel[x][y].ALPHA = average ALPHA in square
Example result with radius=2.
Of course this is rather a concept program; there is a lot of room for memoization and tuning in this script, but it should make the big picture clear.
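For an Android flavour of the same concept, here is a rough Kotlin sketch (my own illustration, not the fiddle's code; the function name is made up). It averages the alpha in a (2*radius+1) square and, only for border pixels, also averages the RGB of the non-transparent neighbours:

import android.graphics.Bitmap
import android.graphics.Color

fun featherAlphaEdges(src: Bitmap, radius: Int = 2): Bitmap {
    val w = src.width
    val h = src.height
    val out = src.copy(Bitmap.Config.ARGB_8888, true)
    for (y in 0 until h) {
        for (x in 0 until w) {
            var aSum = 0; var rSum = 0; var gSum = 0; var bSum = 0
            var count = 0; var opaqueCount = 0
            for (dy in -radius..radius) for (dx in -radius..radius) {
                val nx = (x + dx).coerceIn(0, w - 1)
                val ny = (y + dy).coerceIn(0, h - 1)
                val c = src.getPixel(nx, ny)
                val a = Color.alpha(c)
                aSum += a; count++
                if (a != 0) {
                    rSum += Color.red(c); gSum += Color.green(c); bSum += Color.blue(c)
                    opaqueCount++
                }
            }
            // Fully transparent or fully opaque neighbourhood: leave the pixel alone.
            if (aSum == 0 || aSum == 255 * count) continue
            out.setPixel(x, y, Color.argb(aSum / count,
                rSum / opaqueCount, gSum / opaqueCount, bSum / opaqueCount))
        }
    }
    return out
}

Like the concept program above, this is unoptimized (getPixel/setPixel per pixel is slow); copying the bitmap into an IntArray first would speed it up considerably.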
You could change the alpha of your border based on the background color of the view.
var r: CGFloat = 0
var g: CGFloat = 0
var b: CGFloat = 0
var a: CGFloat = 0
if self.view.backgroundColor?.getRed(&r, green: &g, blue: &b, alpha: &a) == true {
    let imgv = UIImageView(frame: CGRect(x: 100, y: 100, width: 100, height: 100))
    imgv.image = UIImage(named: "name")
    imgv.layer.borderWidth = 2.0
    imgv.layer.borderColor = UIColor(red: r, green: g, blue: b, alpha: 0.5).cgColor
    imgv.layer.cornerRadius = imgv.frame.size.width / 2
    imgv.clipsToBounds = true
} else {
    // Could not get the RGB of the background color
}
You may see errors in this code because I have not yet tested it.
I am trying to visualize the gradients and angles of an image computed by the HOGDescriptor of the OpenCV library for Android. At the start I have a 3-channel image Mat() with 8-bit unsigned ints (CV_8UC3). The result of the computation is a Mat() (CV_32FC2) of the gradients and a Mat() (CV_8UC2) of the angles. How can I visualize these results? What do the values represent? Why does the angle Mat() have 2 channels? Are the 2 channels of the gradient Mat() the x and y components of the gradient? I can't find documentation for the computeGradient method.
A HOG descriptor is a histogram of oriented gradients: it is a histogram where each bin represents the vote for gradients in the corresponding orientation.
In order to compute this descriptor, you should first convert your 3-channel color image into a grayscale image:
cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY); // src: your CV_8UC3 image, gray: single-channel output
The result of "ComputeGradient" method is for exemple two images (same size as the original): x-component and y-component.
You should then be able to compute for each pixel the gradient magnitude and orientation.
mag=sqrt(x*x+y*y)
alpha=atan(y/x)
Then you can fill your histogram. Note that the HOG descriptor is computed by blocks and cells. See this for more detail.
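I cannot say exactly what computeGradient stores in its two-channel outputs, but here is a hedged Kotlin sketch (OpenCV Java bindings) that computes the same kind of data yourself with Sobel and cartToPolar, so you can visualize the magnitude and bin the angles into a histogram:

import org.opencv.core.Core
import org.opencv.core.CvType
import org.opencv.core.Mat
import org.opencv.imgproc.Imgproc

// Returns a displayable 8-bit gradient magnitude image and a float angle image (degrees).
fun gradientMagnitudeAndAngle(bgr: Mat): Pair<Mat, Mat> {
    val gray = Mat()
    Imgproc.cvtColor(bgr, gray, Imgproc.COLOR_BGR2GRAY)

    val gx = Mat(); val gy = Mat()
    Imgproc.Sobel(gray, gx, CvType.CV_32F, 1, 0)    // x-component of the gradient
    Imgproc.Sobel(gray, gy, CvType.CV_32F, 0, 1)    // y-component of the gradient

    val magnitude = Mat(); val angle = Mat()
    // magnitude = sqrt(gx^2 + gy^2), angle = atan2(gy, gx) in degrees
    Core.cartToPolar(gx, gy, magnitude, angle, true)

    // Scale the magnitude to 0..255 so it can be shown as a grayscale image.
    val magVis = Mat()
    Core.normalize(magnitude, magVis, 0.0, 255.0, Core.NORM_MINMAX)
    magVis.convertTo(magVis, CvType.CV_8U)

    return Pair(magVis, angle)
}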
I want to use a background image as a texture, but textures require power-of-two sizes, so I resized the image from 800x480 to 512x512.
But now it is showing the image with some blank space.
How can I show the image on the entire screen? I also want it to be horizontally scrollable.
Well, if this is on Android, that means you don't have glDrawPixels, so the only way you could be "showing" an image would be to render a textured quad. So just make the quad whatever size you need.
You can also draw multiple textures, just by drawing one texture to one location, then drawing another texture to another location.
You need to use the right texture coordinates when rendering your background. Compute them so that there is an offset along the axis perpendicular to the black borders.
I guess you resized your 800x480 image so that it fits in 512x512 pixels, which means the actual background image (inside the 512x512 texture) ends up being 512x307?
Assuming the 307-pixel-high content is centered in the texture, the v texture coordinate offset would need to be (512.0 - 307.0) / 2.0 / 512.0 ~ 0.2, so the texture coordinates of the quad corners would need to be (0.0, 0.2), (1.0, 0.2), (1.0, 0.8), (0.0, 0.8).
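To make the arithmetic concrete, here is a small Kotlin sketch of that computation (plain helper, no GL calls; the function name and the centering assumption are just for illustration):

// Texture coordinates for a background image that was scaled to fit a
// square power-of-two texture and centered vertically inside it.
fun backgroundTexCoords(srcWidth: Int = 800, srcHeight: Int = 480, texSize: Int = 512): FloatArray {
    val contentHeight = srcHeight * texSize / srcWidth        // 480 * 512 / 800 = 307
    val vOffset = (texSize - contentHeight) / 2f / texSize    // ~0.2 when centered
    // (u, v) for the four quad corners: full width, cropped height.
    return floatArrayOf(
        0f, vOffset,          // top-left
        1f, vOffset,          // top-right
        1f, 1f - vOffset,     // bottom-right
        0f, 1f - vOffset      // bottom-left
    )
}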
I have the image below (the white bubble) to draw on a canvas. When I draw the image using the code below, the image's edge gets a rounded black circle, even though the edge's alpha is 0x00.
image.setBounds(left, top, right, bottom);
image.draw(canvas);
Expected result when I draw:
How can I remove the black circle? Is the image wrong? If anyone knows the cause, please give me a clue. Thanks in advance.
Is your expected output taken from an image editor (Photoshop)? If so, that'll be the result of a 32-bit blend, whereas it looks like the alpha blend on Android is being performed in 16 bits, hence the banding in the background and the halo around your image.
Presuming you're using Bitmap objects, you can check whether this is the case by calling bitmap.getConfig() to find their colour depth (from the Bitmap.Config enum).
Edit: One more thing that may be causing the halo - you say the edges of your sprite have an alpha of 0, but what about the RGB values? Make sure the ARGB is set to full-white (ARGB 0x00ffffff) rather than black (ARGB 0x00000000).
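As a small illustrative sketch of those two checks (the helper name is made up), this inspects the colour depth and copies the bitmap to a 32-bit config before drawing so the blend is not done in 16 bits:

import android.graphics.Bitmap

// RGB_565 / ARGB_4444 configs blend at reduced depth and can cause banding and halos.
fun ensureArgb8888(bitmap: Bitmap): Bitmap =
    if (bitmap.config != Bitmap.Config.ARGB_8888)
        bitmap.copy(Bitmap.Config.ARGB_8888, false)
    else
        bitmap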