I want to do exactly what the above image shows via code (Android), but I'm confused about the algorithm to do that. All I know is:
With every pixel:
Convert RGB to HSL
???
Convert HSL back to RGB
Can anyone explain what to do in step 2? Thanks so much.
PS: I can set saturation in Android via ColorMatrix.setSaturation(0), but the resulting image is not the same as in Photoshop (because the hue and lightness are not changed?).
You have many options to desaturate an image.
Also note that desaturating an image is not exactly the same as converting it to B&W, although for some applications you can treat them as equivalent.
I updated this post with more details.
Average
This is the first thing a student imagines doing to convert to grayscale (at least it's what I thought of first!) and it looks like desaturation:
level = (R + G + B) / 3
It doesn't produce a bad result, it's fast and easy to implement. But it has the big drawback that it doesn't match the way humans perceive luminosity.
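A minimal Android sketch of this average method (it assumes a mutable ARGB_8888 Bitmap; the method name is just for illustration):

import android.graphics.Bitmap;
import android.graphics.Color;

// Convert a mutable bitmap to grayscale using the simple per-pixel average.
static void desaturateAverage(Bitmap bitmap) {
    int w = bitmap.getWidth(), h = bitmap.getHeight();
    int[] pixels = new int[w * h];
    bitmap.getPixels(pixels, 0, w, 0, 0, w, h);
    for (int i = 0; i < pixels.length; i++) {
        int c = pixels[i];
        int level = (Color.red(c) + Color.green(c) + Color.blue(c)) / 3;
        pixels[i] = Color.argb(Color.alpha(c), level, level, level);
    }
    bitmap.setPixels(pixels, 0, w, 0, 0, w, h);
}

The same loop works for the methods below; only the line computing level changes.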
Luminance
This second method (Luminance is sometimes called Luminosity, Luma or Intensity) is a better model of the way our eyes perceive brightness. It is based on the fact that cone density in the eye is not uniform across colors. We perceive green much more strongly than red, and red more strongly than blue.
Because we don't perceive all colors with the same intensity, the average method is inaccurate (at least it doesn't produce a result that looks natural). How to manage this? Simply use a weighted average:
level = R * 0.3 + G * 0.59 + B * 0.11
As you can imagine there are a lot of discussions about these values. The current ITU-R recommendation (BT.709) proposes this formula:
level = R * 0.2126 + G * 0.7152 + B * 0.0722
If I'm not wrong, Photoshop uses this one for its simple desaturation function (yes, it's the unrounded version of the first one, the older ITU-R BT.601 weights):
level = R * 0.299 + G * 0.587 + B * 0.114
I don't think you would notice much difference anyway; the recommendation has changed over the years, so take a look at Wikipedia for more details about this formula.
Do you want more details? Read Charles Poynton's article The rehabilitation of gamma and his FAQ on this topic.
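In code, only the per-pixel level computation changes compared to the average sketch above. A rough version with the rounded 0.3/0.59/0.11 weights (the other coefficient sets drop in the same way):

// Weighted average ("luminance") of one ARGB pixel using the rounded weights.
static int luminanceLevel(int c) {
    int r = (c >> 16) & 0xFF;
    int g = (c >> 8) & 0xFF;
    int b = c & 0xFF;
    return (int) (r * 0.3 + g * 0.59 + b * 0.11);
}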
Desaturation
You have each pixel described with the RGB color model, but saturation belongs to the HSL color model (in practice you can use either the HSL or the HSV model when working with saturation). Please read the link for more details about these models.
Desaturating an image consists of the following steps:
Convert each pixel from RGB to HSL (see this article if you need details).
Force the saturation to zero (this should be what setSaturation(0) does)
Convert it back to RGB (see this bookmark).
Let me introduce a big simplification of this process: you can desaturate a color by finding the midpoint between the maximum and the minimum of the RGB components (the lightness; remember that a color, in the RGB color space, is a point in 3D space). The (simplified) formula to get the desaturated image is:
level = (max(R, G, B) + min(R, G, B)) / 2
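A per-pixel sketch of that simplified formula, to be plugged into the same loop as above:

// Lightness-based desaturation: midpoint of the strongest and weakest channel.
static int lightnessLevel(int c) {
    int r = (c >> 16) & 0xFF;
    int g = (c >> 8) & 0xFF;
    int b = c & 0xFF;
    return (Math.max(r, Math.max(g, b)) + Math.min(r, Math.min(g, b))) / 2;
}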
De-composition
A simpler form of desaturation, sometimes called local maximal decomposition, simply picks the maximum value of each RGB triplet:
level = max(R, G, B)
As you can imagine, you can use either the local maximum or the local minimum (I wrote local because the minimum/maximum is searched for each pixel).
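Per pixel, the two decomposition variants are simply:

// Maximal decomposition: keep the strongest channel.
static int maxDecompositionLevel(int c) {
    return Math.max((c >> 16) & 0xFF, Math.max((c >> 8) & 0xFF, c & 0xFF));
}

// Minimal decomposition: keep the weakest channel.
static int minDecompositionLevel(int c) {
    return Math.min((c >> 16) & 0xFF, Math.min((c >> 8) & 0xFF, c & 0xFF));
}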
Other methods
Do not forget that you can get a B&W image (that is, something that looks like a desaturated image) in a very fast way by simply keeping a single channel from the RGB triplet (for example the green channel) and copying that value to all channels.
Sometimes Photoshop tutorials don't use its built-in functions to desaturate an image (the Desaturate command and the Adjustments palette); to achieve better results they add a layer filled with a uniform color (calculated with the weights from the Luminance section) and then merge that layer with the original image (search for such a tutorial and reproduce those steps in your code).
Related
I am working on a chroma key (green screen) filter for Android using OpenGL; the only difference is that I am trying to replace not only a green background but any color passed by the user. I have been able to replace the color, but the problem is that it also replaces the color of the object where the light intensity is very high.
Can anyone help me reduce the light glare from the texture so that my filter can work as expected?
Or point me to a reference green-screen filter which works well.
Anything will be welcome.
EDIT: I have added a screenshot to explain the situation. Here I tried to replace the red background with these clouds; it worked for every area except the one with the glare of light in it. I can overcome this by increasing the tolerance value, but then it also replaces some yellow pixels from the object.
Algorithm-wise, just matching RGB colors is going to be difficult.
The first thing to note is that in your case you are really just looking at some form of luminance-sensitive value, not a pure chroma value. Bright pixels will always have a strong response in the R, G, and B channels, so simple thresholding isn't going to give a reliable workaround here.
If you extracted a luminance-independent chroma value (like a YUV encoding would do) then you could isolate "redness", "greenness", and "blueness" independent of the brightness of the color.
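As a sketch of that idea (the real filter would do this in a fragment shader, but the math is the same): convert both the pixel and the key color to chroma-only components and compare those, ignoring brightness. The BT.601-style coefficients and the tolerance parameter below are assumptions, not taken from your code:

// Returns true if the pixel's chroma (U, V) is close to the key color's chroma,
// regardless of how bright either of them is.
static boolean matchesKeyChroma(int pixel, int keyColor, double tolerance) {
    double[] p = toUV(pixel);
    double[] k = toUV(keyColor);
    double du = p[0] - k[0];
    double dv = p[1] - k[1];
    return Math.sqrt(du * du + dv * dv) < tolerance;
}

// BT.601-style U/V (chroma) components, roughly in [-0.5, 0.5].
static double[] toUV(int c) {
    double r = ((c >> 16) & 0xFF) / 255.0;
    double g = ((c >> 8) & 0xFF) / 255.0;
    double b = (c & 0xFF) / 255.0;
    double y = 0.299 * r + 0.587 * g + 0.114 * b;
    return new double[] { 0.492 * (b - y), 0.877 * (r - y) };
}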
However, this is still going to have edge cases. For example, what happens if your bottle is red, or has a red label? To solve this you need some form of geometric feature extraction (e.g. Sobel edge detection over the image), and then color replacement only within a specific identified region.
None of this is trivial - accurate object detection and recognition is very hard and still very much under active research (e.g. think Tesla cruise control trying to spot and avoid objects while driving).
Hi, I am developing a camera application in which I have to do black-and-white image processing. I googled and found only grayscale image processing. I want to convert my image into black and white like CamScanner does. I also tried OpenCV, but the result is not up to our expectations. If anybody has solved this, please let me know. Thank you.
You will start with a gray-value int[] or byte[] array with intensity values in the range [0, 255]. What you need is a threshold thres, so that all pixels with intensity below that threshold are set to black (0) and all pixels with intensity equal to or above that threshold are set to white (255). For determining the optimal threshold, the Otsu method is a well-established approach. It is rather intuitive: since the threshold divides the pixels into two subsets, you take the threshold value that minimizes the variance within the two subsets, which is the same as maximizing the variance between the two subsets. As you can see from the Wikipedia link, the calculation is rather simple, and they also provide Java code. I work with this too and it is rather efficient.
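A minimal sketch of the Otsu computation over such an array (the logic follows the Wikipedia pseudocode; the names and the binarization step are mine):

// Find the Otsu threshold for gray values in [0, 255] and binarize in place.
static void otsuBinarize(int[] gray) {
    int[] hist = new int[256];
    for (int v : gray) hist[v]++;

    double sumAll = 0;
    for (int t = 0; t < 256; t++) sumAll += t * (double) hist[t];

    int total = gray.length;
    int wB = 0;                 // pixels below the candidate threshold (background)
    double sumB = 0;
    double maxVar = -1;
    int threshold = 0;
    for (int t = 0; t < 256; t++) {
        int wF = total - wB;    // pixels at or above the threshold (foreground)
        if (wB > 0 && wF > 0) {
            double mB = sumB / wB;
            double mF = (sumAll - sumB) / wF;
            double between = (double) wB * wF * (mB - mF) * (mB - mF);
            if (between > maxVar) { maxVar = between; threshold = t; }
        }
        wB += hist[t];
        sumB += t * (double) hist[t];
    }

    for (int i = 0; i < gray.length; i++) {
        gray[i] = gray[i] < threshold ? 0 : 255;
    }
}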
Issue partially resolved, leaving previous post & code here for reference; the new issue (stated in the title) is after the bold text at the bottom.
I am trying to use colour picking to identify objects in OpenGL on Android. My code works fine for the most part: all objects are assigned unique colours in a 0.00f, 0.00f, 0.00f format (alpha is always 0), and when clicking on the objects they are identified (most of the time) using glReadPixels and converting/comparing the values.
The problem only occurs when using certain colour values. For instance, if an object is given the colour 0.0f, 0.77f, 1.0f (RGB), it will not colour solidly. It will colour parts of the object 0.0,0.76,1.0 or 0.0,0.78,1.0. I thought it might be colour blending, so I coloured every object in the scene this colour, but the same thing happened; this also eliminates any lighting issues, which I thought might be another cause (despite not implementing lighting explicitly, to my knowledge). This issue occurs on a few colours, not just the one stated.
How can I tell the object or the renderer to colour these objects solidly exactly as stated, instead of a blend of the colours either side of it?
The colours are not coming through as stated: if a colour of R:0.0f G:0.77f B:1.0f is passed to glUniform4v & glDrawArrays, it is drawn (and read by glReadPixels) as R:0.0f G:0.78f B:1.0f. This is not the only value with which this happens; it is just an example.
Any suggestions for a fix are appreciated
There are at least two aspects that can come in your way of getting exactly the expected color:
Dithering
Dithering for color output is enabled by default. Based on what I've seen, it doesn't typically seem to come into play (or at least not in a measurable way) if you're rendering to targets with 8 bits per component. But it's definitely very noticeable when you're rendering to a lower color depth, like RGB565.
The details of what exactly happens as the result of dithering appear to be very vendor-dependent.
Dithering is enabled by default. For typical use, that's probably a good thing, because you only care about the visual appearance. And the whole idea of dithering is obviously to enhance the visual quality. But if you rely on getting controllable and predictable values for your output colors, like it's necessary in your picking case, you should always disable dithering:
glDisable(GL_DITHER);
Precision
As you're already aware based on your code, precision is a big concern here. You obviously can't expect to get exactly the same floating point value back as the one you originally specified for the color.
The primary loss of precision comes from the conversion of the color value to a normalized value in the color buffer. With 8 bits/component color depth, the precision of that value is 1.0/255.0. Which means that you should be fine with generating and comparing values with a precision of 0.01.
Another source of precision loss is the shader processing. Since you specify mediump for the precision in the shader code, which gives you at least about 10 bits of precision, that also looks like it should not be harmful.
One possibility is that you didn't actually get a configuration with 8-bit color components. This would also be consistent with the visible dithering effect. If you got an RGB565 surface, for example, your observed precision starts to make sense.
For example, with RGB565, if you pass in 0.77 for the green component, the value is multiplied by 63 (2^6 - 1) during fixed-point conversion, which gives 48.51. Now, the spec says:
Values are converted (by rounding to nearest) to a fixed-point value with m bits, where m is the number of bits allocated to the corresponding R, G, B, A, or depth buffer component.
The nearest value for 48.51 is 49. But if you lose any kind of precision somewhere on the way, it could very easily become 48.
Now, when these values are converted back to float as you read them back, they are divided by 63.0. If the value in the framebuffer was 48, the result is 0.762, which your code rounds to 0.76. If it was 49, the result is 0.777, which rounds to 0.78.
So in short:
Be very careful about what kind of precision you can expect.
I think you might have an RGB565 framebuffer.
Also, using multiples of 0.01 for the values does not look like an ideal strategy, because it does not line up with the representation in the framebuffer. I would use multiples of 1/(2^b - 1), where b is the number of bits in the color components. Use those values when specifying colors, and apply the matching quantization when you compare the values you read back with the expected values.
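A rough sketch of that strategy (it assumes 8 bits per component; adjust the bit count if the surface really is RGB565; the names are illustrative):

// Encode a picking id as a color component aligned to the framebuffer grid.
static float encodeComponent(int id, int bits) {
    int levels = (1 << bits) - 1;   // 255 for 8 bits, 63 for a 6-bit green channel
    return (float) id / levels;
}

// Recover the id from a component read back with glReadPixels.
static int decodeComponent(float value, int bits) {
    int levels = (1 << bits) - 1;
    return Math.round(value * levels);   // snap to the nearest representable level
}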
I want to create an application to detect the shapes of objects (circles, squares and rectangles only, geometric shapes), without using a marker-less or edge-based way to detect the shape in the augmentation.
I have used the following things for this, going through the procedures of the tutorials that already exist for the Metaio SDK and OpenCV:
1) Metaio : http://dev.metaio.com/sdk/tutorials/hello-world/
2) OpenCV : http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html#canny-detector
These are the things I have tried to implement.
Geometry shapes:
1) Circle: in real time this could be any circular object
2) Square: in real time this could be any square object
3) Rectangle: in real time this could be any rectangular object
How can I achieve this augmentation scenario?
Thanks in advance
Update: This StackOverflow post (with some nice sample pictures included) seems to have solved at least the circle-detection part of your problem. The reference for the excellent write-up it points to can be found on this wiki page (only through the Wayback Machine, unfortunately).
In case that new link doesn't hold either, here is the relevant section:
Detecting Images:
There are a few fiddly bits that need to be taken care of to detect circles in an image. Before you process an image with cvHoughCircles (the function for circle detection), you may wish to first convert it into a gray image and smooth it. The following is the general procedure of the functions you need to use, with examples of their usage.
Create Image
Supposing you have an initial image for processing called 'img', first you want to create an image variable called 'gray' with the same dimensions as img using cvCreateImage.
IplImage* gray = cvCreateImage( cvGetSize(img), 8, 1 );
// allocate a 1 channel byte image
CvMemStorage* storage = cvCreateMemStorage(0);
IplImage* cvCreateImage(CvSize size, int depth, int channels);
size: cvSize(width,height);
depth: pixel depth in bits: IPL_DEPTH_8U, IPL_DEPTH_8S, IPL_DEPTH_16U,
IPL_DEPTH_16S, IPL_DEPTH_32S, IPL_DEPTH_32F, IPL_DEPTH_64F
channels: Number of channels per pixel. Can be 1, 2, 3 or 4. The channels
are interleaved. The usual data layout of a color image is
b0 g0 r0 b1 g1 r1 ...
Convert to Gray
Now you need to convert it to gray using cvCvtColor which converts between colour spaces.
cvCvtColor( img, gray, CV_BGR2GRAY );
cvCvtColor(src,dst,code); // src -> dst
code = CV_<X>2<Y>
<X>/<Y> = RGB, BGR, GRAY, HSV, YCrCb, XYZ, Lab, Luv, HLS
e.g.: CV_BGR2GRAY, CV_BGR2HSV, CV_BGR2Lab
Smooth Image
This is done so as to prevent a lot of false circles from being detected. You might need to play around with the last two parameters, noting that they need to multiply to an odd number.
cvSmooth( gray, gray, CV_GAUSSIAN, 9, 9 );
// smooth it, otherwise a lot of false circles may be detected
void cvSmooth( const CvArr* src, CvArr* dst,
int smoothtype=CV_GAUSSIAN,
int param1, int param2);
src
The source image.
dst
The destination image.
smoothtype
Type of the smoothing:
CV_BLUR_NO_SCALE (simple blur with no scaling) - summation over a pixel param1×param2 neighborhood. If the neighborhood size is not fixed, one may use cvIntegral function.
CV_BLUR (simple blur) - summation over a pixel param1×param2 neighborhood with subsequent scaling by 1/(param1•param2).
CV_GAUSSIAN (gaussian blur) - convolving image with param1×param2 Gaussian.
CV_MEDIAN (median blur) - finding median of param1×param1 neighborhood (i.e. the neighborhood is square).
CV_BILATERAL (bilateral filter) - applying bilateral 3x3 filtering with color sigma=param1 and space sigma=param2
param1
The first parameter of smoothing operation.
param2
The second parameter of smoothing operation.
In case of simple scaled/non-scaled and Gaussian blur if param2 is zero, it is set to param1
Detect using Hough Circle
The function cvHoughCircles is used to detect circles on the gray image. Again the last two parameters might need to be fiddled around with.
CvSeq* circles =
cvHoughCircles( gray, storage, CV_HOUGH_GRADIENT, 2, gray->height/4, 200, 100 );
CvSeq* cvHoughCircles( CvArr* image, void* circle_storage,
int method, double dp, double min_dist,
double param1=100, double param2=100,
int min_radius=0, int max_radius=0 );
======= End of relevant section =========
The rest of that wiki page is actually very good (although, I'm not going to recopy it here since the rest is off-topic to the original question and StackOverflow has a size limit for answers). Hopefully, that link to the cached copy on the Wayback machine will keep on working indefinitely.
Previous Answer Before my Update:
Great! Now that you posted some examples, I can see that you're not only after rectangles, squares, and circles, you also want to find those shapes in a 3D environment, thus potentially hunting for special cases of parallelograms and ovals that from video frame to video frame can eventually reveal themselves to be rectangles, squares, and/or circles (depending on how you pan the camera).
Personally, I find it easier to work through a problem myself than trying to understand how to use an existing (often times very mature) library. This is not to say that my own work will be better than a mature library, it certainly won't be. It's just that once I can work myself through a problem, then it becomes easier for me to understand and use a library (the library itself which will often run much faster and smarter than my own solution).
So the next step I would take is to change the color space of the bitmap into grayscale. I have trouble understanding and manipulating a color bitmap, especially since there are so many different ways it can be represented, but a grayscale bitmap is much easier to understand and manipulate. For a grayscale bitmap, just imagine a grid of values, with each value representing a different light intensity.
And for now, let's limit the scope of the problem to finding parallelograms and ovals inside a static 2D environment (we'll worry about processing 3D environments and moving video frames later, or should I say, you'll worry about that part yourself since that problem is already becoming too complicated for me).
And for now also, let's not worry about what tool or language you use. Just use whatever is easiest and most expedient. For instance, just about anything can be scripted to automatically convert an image to grayscale, assuming time is no issue: ImageMagick, Gimp, Marvin, Processing, Python, Ruby, Java, etc.
And with any of those tools, it should be easy to group pixels with similar enough intensities (to make the calculations more manageable) and to sort each pixel's coordinates into a different array for each light-intensity bucket. In other words, it shouldn't be too difficult to arrange some sort of crude histogram of arrays, keyed by intensity, that contain each pixel's x and y positions.
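As a rough illustration of that bucketing idea (the bucket count and the Point type are just placeholders; any grouping granularity works):

import java.awt.Point;
import java.util.ArrayList;
import java.util.List;

// Group pixel coordinates into buckets of similar gray intensity.
static List<List<Point>> bucketByIntensity(int[][] gray, int buckets) {
    List<List<Point>> histogram = new ArrayList<>();
    for (int i = 0; i < buckets; i++) histogram.add(new ArrayList<>());
    int bucketSize = 256 / buckets;   // e.g. 16 buckets of 16 intensity levels each
    for (int y = 0; y < gray.length; y++) {
        for (int x = 0; x < gray[y].length; x++) {
            int bucket = Math.min(gray[y][x] / bucketSize, buckets - 1);
            histogram.get(bucket).add(new Point(x, y));
        }
    }
    return histogram;
}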
After that, the problem becomes a problem more like this one (which can be found on StackOverflow) and thus can be worked upon with its suggested solution.
And once you're able to work through the problem in that way, converting the solution you come up with to a language better suited for the task shouldn't be too difficult. It should also be much easier to understand and use the underlying functions of any existing library you end up choosing for the task. At least, that's what I'm hoping for, since I'm not familiar enough with the OpenCV libraries themselves to really help you with them.
I am building an Android app which provides the user with some image-processing functionality. But before applying any image transformation I would like to do gamma correction to improve the image. I know how to perform gamma correction, but I don't know what gamma value to use, as the image itself doesn't carry the gamma value with which it was created. Any information regarding how to select a gamma value for a particular image would be very helpful.
It appears that what you really want is to lighten or darken the average brightness of an image to match some optimum value. Yes, the gamma function can do that. It might not be the best choice; in fact, for under- or over-exposure a simple linear multiplication might be better. But let's stick with gamma for now.
Measure the average brightness of the image and call it a, with values from 0-255. You have a target for the optimum brightness, let's call that t. If the unknown gamma is g then you get:
t/255 = (a/255)^g
Solving for g gives:
g = log(t/255) / log(a/255)
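A minimal sketch of that calculation applied to an 8-bit image (the target value and array layout are assumptions; it also presumes 0 < a < 255 so the logarithms are defined):

// Estimate gamma from the average brightness and apply it with a lookup table.
static void autoGamma(int[] pixels, int target) {
    double sum = 0;
    for (int v : pixels) sum += v;
    double a = sum / pixels.length;   // average brightness in [0, 255]

    double g = Math.log(target / 255.0) / Math.log(a / 255.0);

    int[] lut = new int[256];
    for (int i = 0; i < 256; i++) {
        lut[i] = (int) Math.round(255.0 * Math.pow(i / 255.0, g));
    }
    for (int i = 0; i < pixels.length; i++) {
        pixels[i] = lut[pixels[i]];
    }
}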