I am trying to calibrate my YUV conversion against my PC's 24-inch screen, and I found that the YUV response keeps changing.
For example, if the camera looks at the Windows logo on the screen, which is mostly blue, I get the correct blue from the YUV conversion.
But if I open another window with a lot of white, the YUV conversion becomes very poor; there is almost no blue response in the blue areas.
The same problem occurs when I change the angle of the camera: the YUV response changes automatically.
Whenever there is more or less white on the screen, the color response (YUV) is automatically modified.
Is it possible to avoid this automatic change, and can someone explain why it happens?
Thanks for the help.
I have deleted the answer that said it is nearly impossible to calibrate YUV to RGB. Setting the exposure lock, white-balance lock, and exposure compensation once at the beginning does nothing to help the calibration.
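For reference, this is roughly what that locking attempt looks like with the old android.hardware.Camera API (a sketch only; `camera` is assumed to be an already-opened Camera instance with the preview running):

```java
// Sketch: lock auto-exposure and auto-white-balance once, after starting the preview.
Camera.Parameters params = camera.getParameters();
if (params.isAutoExposureLockSupported()) {
    params.setAutoExposureLock(true);        // stop AE from re-adapting to scene brightness
}
if (params.isAutoWhiteBalanceLockSupported()) {
    params.setAutoWhiteBalanceLock(true);    // stop AWB from shifting the color response
}
params.setExposureCompensation(0);           // keep exposure compensation fixed
camera.setParameters(params);
```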
Conclusion: it looks nearly impossible to calibrate the RGB color values from YUV. Exposure and lighting change all the time, and there is no information about those changes.
If someone has a solution, I'll take it ;)
I have been working on touchless biometrics. I want to extract fingerprints from an image captured with an ordinary mobile camera. I have achieved a reasonable image, but it is not good enough to be verified by the government.
The ridge lines need to be thicker and better connected.
What have I tried so far?
Below are the steps I took to extract a fingerprint from the image (a code sketch of this pipeline follows the list). The result is decent, but the lines are disconnected or merged with neighboring lines.
Changed contrast and brightness to 0.8 and 25 respectively
Converted from RGB to gray
Applied histogram equalization
Normalized the image
Applied adaptive (Gaussian C) threshold with block size 15 and constant 2
Smoothed the image to get rid of edges
Changed contrast and brightness again to 1.7 and -40 respectively
Applied Gaussian blur
Added weighted sum (alpha = 0.5, beta = -0.5, gamma = 0)
Applied binary threshold (threshold = 10)
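For clarity, here is a rough OpenCV-for-Android (Java) sketch of the pipeline above. The kernel sizes for the two smoothing steps are my assumptions, since they are not stated in the list:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class FingerprintPipeline {
    // Rough sketch of the steps listed above; kernel sizes are assumptions.
    public static Mat process(Mat rgb) {
        Mat img = new Mat();
        rgb.convertTo(img, -1, 0.8, 25);                       // contrast 0.8, brightness +25

        Mat gray = new Mat();
        Imgproc.cvtColor(img, gray, Imgproc.COLOR_RGB2GRAY);   // RGB -> gray
        Imgproc.equalizeHist(gray, gray);                      // histogram equalization
        Core.normalize(gray, gray, 0, 255, Core.NORM_MINMAX);  // normalize

        Mat bin = new Mat();
        Imgproc.adaptiveThreshold(gray, bin, 255,
                Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C,
                Imgproc.THRESH_BINARY, 15, 2);                 // block size 15, constant 2

        Imgproc.blur(bin, bin, new Size(3, 3));                // smoothing (assumed 3x3 box filter)
        bin.convertTo(bin, -1, 1.7, -40);                      // contrast 1.7, brightness -40

        Mat blurred = new Mat();
        Imgproc.GaussianBlur(bin, blurred, new Size(5, 5), 0); // Gaussian blur (assumed 5x5)

        Mat sharp = new Mat();
        Core.addWeighted(bin, 0.5, blurred, -0.5, 0, sharp);   // alpha 0.5, beta -0.5, gamma 0

        Mat result = new Mat();
        Imgproc.threshold(sharp, result, 10, 255, Imgproc.THRESH_BINARY); // binary threshold 10
        return result;
    }
}
```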
The original image would be something like this (I no longer have the original that corresponds to the processed image).
And the result is the attached (processed) image.
I need the lines to be better connected and separated from neighboring lines so that ridge endings and ridge bifurcations can be identified easily.
I also came across this link, but with my very limited background in image processing I am unable to understand it. Any guidance regarding that link would also help me a lot.
I am using OpenCV on Android.
Any help is highly appreciated.
I saw a video on YouTube about ray tracing in video-game rendering, in which the creator of the Q2VKPT engine uses a "temporal filter" over multiple frames to get a clean image (A-SVGF).
https://www.youtube.com/watch?v=tbsudki8Sro
from 26:20 to 28:40
Maybe, if you capture three different images of the fingerprint and apply a similar approach, you could get a picture with less noise that works better.
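That temporal idea can be approximated very simply by averaging a few captures before running the pipeline. A minimal OpenCV (Java) sketch, assuming the frames are already aligned with each other:

```java
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class TemporalAverage {
    // Average several already-aligned grayscale frames to suppress sensor noise.
    public static Mat average(List<Mat> frames) {
        Mat acc = Mat.zeros(frames.get(0).size(), CvType.CV_32F);
        Mat tmp = new Mat();
        for (Mat frame : frames) {
            frame.convertTo(tmp, CvType.CV_32F); // accumulate in float to avoid overflow
            Core.add(acc, tmp, acc);
        }
        Mat avg = new Mat();
        acc.convertTo(avg, CvType.CV_8U, 1.0 / frames.size()); // scale back to 8-bit
        return avg;
    }
}
```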
I am working on a chroma-key (green-screen) filter for Android using OpenGL; the only difference is that I am trying to replace not only a green background but any color passed by the user. I have been able to replace the color, but the problem is that it also replaces the color on the object wherever the light intensity is very high.
Can anyone help me reduce the light glare on the texture so that my filter works as expected?
Or point me to a reference green-screen filter that works well.
Anything will be welcome.
EDIT: I have added a screenshot to explain the situation. Here I tried to replace the red background with these clouds; it worked everywhere except the area with the light glare. I can overcome this by increasing the tolerance value, but then it also replaces some yellow pixels on the object.
Algorithm-wise, just matching RGB colors is going to be difficult.
The first thing to note is that in your case you are really looking at a luminance-sensitive value, not a pure chroma value. Bright pixels will always have a strong response in the R, G, and B channels, so simple thresholding isn't going to give a reliable workaround here.
If you extracted a luminance-independent chroma value (as a YUV encoding would give you), then you could isolate "redness", "greenness", and "blueness" independently of the brightness of the color.
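As an illustration of that idea (an OpenCV Java sketch rather than the OpenGL shader itself), the key mask can be built from the U/V chroma planes only, so brightness and glare have much less influence; the tolerance value here is an assumption:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class ChromaMask {
    // Mask pixels whose chroma (U,V) is close to the key color's chroma,
    // ignoring the luminance (Y) channel entirely.
    public static Mat keyMask(Mat rgbFrame, Scalar keyRgb, double tolerance) {
        Mat yuv = new Mat();
        Imgproc.cvtColor(rgbFrame, yuv, Imgproc.COLOR_RGB2YUV);

        // Convert the key color to YUV as well (1x1 image trick).
        Mat key = new Mat(1, 1, rgbFrame.type(), keyRgb);
        Mat keyYuv = new Mat();
        Imgproc.cvtColor(key, keyYuv, Imgproc.COLOR_RGB2YUV);
        double[] k = keyYuv.get(0, 0);   // k[0]=Y, k[1]=U, k[2]=V

        // Accept any Y, but only U/V values within the tolerance of the key's U/V.
        Mat mask = new Mat();
        Core.inRange(yuv,
                new Scalar(0,   k[1] - tolerance, k[2] - tolerance),
                new Scalar(255, k[1] + tolerance, k[2] + tolerance),
                mask);
        return mask;
    }
}
```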
However, this still has edge cases. For example, what happens if your bottle is red, or has a red label? To solve that you need some form of geometric feature extraction (e.g. a Sobel edge detection over the image), and then color-replace only within a specific identified region.
None of this is trivial: accurate object detection and recognition is very hard and still very much under active research (think of a Tesla's cruise control trying to spot and avoid objects while driving).
Hi, I am developing a camera application in which I have to do black-and-white image processing. I googled and found only grayscale image processing. I want to convert my image to black and white the way CamScanner does. I also tried OpenCV, but the result did not meet our expectations. If anybody has solved this, please let me know. Thank you.
You will start with a grayscale int[] or byte[] array with intensity values in the range [0, 255]. What you need is a threshold thres, so that all pixels with intensity below that threshold are set to black (0) and all pixels with intensity equal to or above it are set to white (255). For determining the optimal threshold, Otsu's method is a well-established approach. It is rather intuitive: since the threshold divides the pixels into two subsets, you take the threshold value that minimizes the variance within the two subsets, which is the same as maximizing the variance between them. As you can see from the Wikipedia article, the calculation is fairly simple, and they also provide Java code. I work with this too and it is quite efficient.
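A compact Java sketch of Otsu's method over such a grayscale array might look like this (the standard textbook formulation, not tied to any particular library):

```java
public class OtsuThreshold {
    // Returns the Otsu threshold for 8-bit intensities; pixels below it become black,
    // pixels at or above it become white.
    public static int otsu(int[] gray) {
        int[] hist = new int[256];
        for (int v : gray) hist[v]++;

        long total = gray.length;
        double sumAll = 0;
        for (int i = 0; i < 256; i++) sumAll += i * (double) hist[i];

        double sumBack = 0, bestVar = -1;
        long wBack = 0;
        int best = 0;
        for (int t = 0; t < 256; t++) {
            wBack += hist[t];                  // weight of the background class
            if (wBack == 0) continue;
            long wFore = total - wBack;        // weight of the foreground class
            if (wFore == 0) break;

            sumBack += t * (double) hist[t];
            double meanBack = sumBack / wBack;
            double meanFore = (sumAll - sumBack) / wFore;

            // Between-class variance; maximizing it is equivalent to minimizing
            // the within-class variance.
            double betweenVar = (double) wBack * wFore
                    * (meanBack - meanFore) * (meanBack - meanFore);
            if (betweenVar > bestVar) {
                bestVar = betweenVar;
                best = t;
            }
        }
        return best;
    }
}
```

Once the threshold is known, binarization is a single pass: pixels below it become 0, all others 255.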
I want to get, in RGB (or anything I can convert to RGB later), the color value at the middle of a YUV image, i.e. the color of the centre XY pixel.
There's nice code out there to convert the whole pixel array from an Android camera to RGB, but this seems a bit wasteful if I just want the centre pixel.
Normally I'd just look at the loop and figure out where it processes the middle pixel, but I don't understand YUV or the conversion code well enough to figure out where the data I need is.
Any help or pointers?
Cheers
-Thomas
Using this guide here:
stackoverflow.com/questions/5272388/extract-black-and-white-image-from-android-cameras-nv21-format
It explains the process fairly well.
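Building on that approach, here is a minimal sketch of reading only the centre pixel, assuming the preview data is in NV21 (the Android default): the Y plane comes first, followed by an interleaved V/U plane at half resolution.

```java
public class CenterPixel {
    // Returns {r, g, b} for the centre pixel of an NV21 frame (ITU-R BT.601 conversion).
    public static int[] centerRgb(byte[] nv21, int width, int height) {
        int cx = width / 2, cy = height / 2;

        int y = nv21[cy * width + cx] & 0xFF;                      // Y plane: one byte per pixel
        int uvIndex = width * height + (cy / 2) * width + (cx & ~1);
        int v = nv21[uvIndex] & 0xFF;                              // NV21 stores V first...
        int u = nv21[uvIndex + 1] & 0xFF;                          // ...then U, per 2x2 block

        int c = y - 16, d = u - 128, e = v - 128;
        int r = clamp((298 * c + 409 * e + 128) >> 8);
        int g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8);
        int b = clamp((298 * c + 516 * d + 128) >> 8);
        return new int[] { r, g, b };
    }

    private static int clamp(int x) {
        return x < 0 ? 0 : (x > 255 ? 255 : x);
    }
}
```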
However, it seems I was having a different problem than I expected, but it's different enough that I've reposted it as a separate question.
I have tried two different approaches for capturing an image from the Android camera hardware when the user taps a capture button. One: call autoFocus, and once the AutoFocusCallback reports success, capture the image. Two: capture the image without calling autoFocus at all. In both cases I noticed that the byte array passed to onPictureTaken has a different length. The one produced after autoFocus completes successfully is usually at least 50 KB larger than the one produced when autoFocus is skipped entirely. Why is that? Could somebody shed some light on it? What I don't understand is this: when autoFocus completes successfully, the picture should simply have better quality, and quality is just the values of the bits in the bytes representing the RGB channels of each pixel. The overall number of pixels, and therefore the total number of bytes representing the RGB channels, should be the same regardless of what values are stored in them. Yet apparently more bytes of data are included for the sharper image taken after autoFocus than for a regular image.
I have been researching this for over a month now. I would really appreciate a quick answer.
All the image/video capture drivers use YUV formats for capture, in most cases either YUV420 or YUV422. Refer to this link for more information on YUV formats: http://www.fourcc.org/yuv.php
As you mention, the pictures taken after the autofocus call are much sharper (edges are crisper and the contrast is better), and that sharpness is missing in images captured without autofocus.
As you know, JPEG compression is used to compress the image data, and the compression is based on macroblocks (square blocks in the image). An image with sharper edges and more detail needs more coefficients to encode than a blurry image, in which most neighbouring pixels look as if they have been averaged out. That is why the autofocused image is bound to contain more data: it has more detail.
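You can see this effect directly by JPEG-encoding the same frame before and after blurring it; the blurred copy typically compresses to noticeably fewer bytes. A rough OpenCV (Java) sketch, with an arbitrary 15x15 blur kernel chosen just for the demonstration:

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class JpegSizeDemo {
    // Compare the JPEG size of a sharp image with a blurred copy of itself.
    public static void compare(Mat sharp) {
        MatOfByte sharpJpeg = new MatOfByte();
        Imgcodecs.imencode(".jpg", sharp, sharpJpeg);

        Mat blurred = new Mat();
        Imgproc.GaussianBlur(sharp, blurred, new Size(15, 15), 0);
        MatOfByte blurredJpeg = new MatOfByte();
        Imgcodecs.imencode(".jpg", blurred, blurredJpeg);

        System.out.println("sharp:   " + sharpJpeg.total() + " bytes");
        System.out.println("blurred: " + blurredJpeg.total() + " bytes");
    }
}
```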