Android black and white image processing

Hi, I am developing a camera application in which I have to do black and white image processing. I googled and found only grayscale image processing. I want to convert my image into black and white like CamScanner does. I also tried OpenCV, but the result was not up to our expectations. If anybody has solved this, please let me know. Thank you.

You will start with a gray-value int[] or byte[] array with intensity values in the range [0, 255]. What you need is a threshold thres, so that all pixels with intensity below that threshold are set to black (0) and all pixels with intensity equal to or above that threshold are set to white (255). For determining the optimal threshold, the Otsu method is a well-established approach. It is rather intuitive: since the threshold divides the pixels into two subsets, you take the threshold value that minimizes the variance within the two subsets, which is the same as maximizing the variance between the two subsets. As you can see from the Wikipedia link, the calculation is rather simple, and they also provide Java code. I work with this too, and it is rather efficient.
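For illustration, here is a minimal self-contained Java sketch of Otsu's method along those lines, assuming a plain int[] of gray values in [0, 255] (the method name is my own choice; the algorithm follows the standard formulation):

```java
// Otsu's method: pick the threshold that maximizes the between-class
// variance, which is equivalent to minimizing the within-class variance.
static int otsuThreshold(int[] gray) {
    int[] histogram = new int[256];
    for (int p : gray) histogram[p]++;

    long total = gray.length;
    long sumAll = 0;
    for (int i = 0; i < 256; i++) sumAll += (long) i * histogram[i];

    long sumBackground = 0;
    long weightBackground = 0;
    double maxVariance = 0;
    int threshold = 0;

    for (int t = 0; t < 256; t++) {
        weightBackground += histogram[t];
        if (weightBackground == 0) continue;
        long weightForeground = total - weightBackground;
        if (weightForeground == 0) break;

        sumBackground += (long) t * histogram[t];
        double meanBackground = (double) sumBackground / weightBackground;
        double meanForeground = (double) (sumAll - sumBackground) / weightForeground;
        double diff = meanBackground - meanForeground;
        double betweenVariance = weightBackground * weightForeground * diff * diff;

        if (betweenVariance > maxVariance) {
            maxVariance = betweenVariance;
            threshold = t;
        }
    }
    return threshold;
}
```

Pixels below the returned threshold go to 0 and the rest to 255, which gives the hard black-and-white look of a document scanner.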

Related

Extract Fingerprint from Image

I have been working on touchless biometrics. I want to extract fingerprints from images captured by a normal mobile camera. I have achieved a good image, but it is not good enough to be verified by the government.
The lines need to be thicker and more connected.
What have I tried so far?
Below are the steps I took to extract a fingerprint from the image (see the code sketch after this list). The result is good, but the lines are disconnected or merged with one another.
Changed contrast and brightness to 0.8 and 25 respectively
Converted from RGB to gray
Applied histogram equalization
Normalized the image
Applied an adaptive (Gaussian C) threshold with a block size of 15 and a constant of 2
Smoothed the image to get rid of jagged edges
Changed contrast and brightness again to 1.7 and -40 respectively
Applied a Gaussian blur
Added a weighted sum (alpha = 0.5, beta = -0.5, gamma = 0)
Applied a binary threshold (threshold = 10)
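For reference, here is roughly how those steps map onto the OpenCV-for-Android Java API. The numeric parameters are the ones listed above; the Mat names, the median-blur kernel used for the smoothing step, and the Gaussian kernel size are assumptions on my part:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

// Sketch of the pipeline above; src is the captured RGB image.
static Mat extractRidges(Mat src) {
    src.convertTo(src, -1, 0.8, 25);                       // 1. contrast 0.8, brightness +25
    Mat gray = new Mat();
    Imgproc.cvtColor(src, gray, Imgproc.COLOR_RGB2GRAY);   // 2. RGB -> gray
    Imgproc.equalizeHist(gray, gray);                      // 3. histogram equalization
    Core.normalize(gray, gray, 0, 255, Core.NORM_MINMAX);  // 4. normalize
    Mat thresh = new Mat();
    Imgproc.adaptiveThreshold(gray, thresh, 255,
            Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C,
            Imgproc.THRESH_BINARY, 15, 2);                 // 5. block size 15, constant 2
    Imgproc.medianBlur(thresh, thresh, 3);                 // 6. smooth jagged edges (kernel assumed)
    thresh.convertTo(thresh, -1, 1.7, -40);                // 7. contrast 1.7, brightness -40
    Mat blurred = new Mat();
    Imgproc.GaussianBlur(thresh, blurred, new Size(5, 5), 0); // 8. Gaussian blur (kernel assumed)
    Mat result = new Mat();
    Core.addWeighted(thresh, 0.5, blurred, -0.5, 0, result);  // 9. alpha 0.5, beta -0.5, gamma 0
    Imgproc.threshold(result, result, 10, 255, Imgproc.THRESH_BINARY); // 10. binary threshold 10
    return result;
}
```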
The original image would be like this (I no longer have the original of the processed image).
And the result is the attached (processed) image.
I need the lines to be more connected and separated from other lines so that ridge endings and ridge bifurcations can easily be identified.
I also came across this link, but due to my very limited background in image processing, I am unable to understand it. Any guidance regarding this link would also help me a lot.
I am using OpenCV on Android.
Any help is highly appreciated.
I saw a YouTube video about ray tracing in video game rendering, in which the creator of the Q2VKPT engine uses a "temporal filter" (ASVGF) that combines multiple frames to get a clean image.
https://www.youtube.com/watch?v=tbsudki8Sro
from 26:20 to 28:40
Maybe, if you captured three different images of the fingerprint and applied a similar approach, you could get a picture with less noise that works better.
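As a very rough illustration of that idea (plain frame averaging, not a full temporal filter like ASVGF, and assuming the frames are already aligned):

```java
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;

// Average several aligned grayscale captures to suppress sensor noise
// before binarization. Real temporal filters also reproject and weight
// samples across frames; this is only the simplest form of the idea.
static Mat averageFrames(List<Mat> frames) {
    Mat acc = Mat.zeros(frames.get(0).size(), CvType.CV_32FC1);
    for (Mat frame : frames) {
        Mat f32 = new Mat();
        frame.convertTo(f32, CvType.CV_32FC1);
        Core.add(acc, f32, acc);
    }
    Core.multiply(acc, new Scalar(1.0 / frames.size()), acc);
    Mat avg = new Mat();
    acc.convertTo(avg, CvType.CV_8UC1);
    return avg;
}
```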

How to handle specular highlights in chroma key filter?

I am working on a chroma key (green screen) filter for Android using OpenGL; the only difference is that I am trying to replace not only a green background but any color passed by the user. I have been able to replace the color, but the problem is that it also replaces the color on the object where the light intensity is very high.
Can anyone help me reduce the light glare from the texture so that my filter works as expected?
Or point me to any reference green-screen filter that works well.
Anything will be welcome.
EDIT: I have added a screenshot to explain the situation. Here I tried to replace the red background with these clouds; it worked for the whole area except the part with the light glare in it. I can overcome this by increasing the tolerance value, but then it also replaces some yellow pixels from the object.
Algorithm-wise, just matching RGB colors is going to be difficult.
The first thing to note is that in your case you are really looking at some form of luminance-sensitive value, not a pure chroma value. Bright pixels will always have a strong response in the R, G, and B channels, so simple thresholding isn't going to give a reliable workaround here.
If you extracted a luminance-independent chroma value (as a YUV encoding does) then you could isolate "redness", "greenness", and "blueness" independently of the brightness of the color.
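For illustration, a keying test along those lines might look like this in Java (the BT.601 conversion coefficients are standard; the key chroma and tolerance parameters are assumptions):

```java
// Compare only the chroma (U, V) of a pixel against the key color,
// discarding luminance so that bright highlights don't match by accident.
static boolean matchesKey(int r, int g, int b, double keyU, double keyV, double tolerance) {
    // BT.601 RGB -> UV (the Y term is deliberately not computed).
    double u = -0.147 * r - 0.289 * g + 0.436 * b;
    double v =  0.615 * r - 0.515 * g - 0.100 * b;
    double du = u - keyU;
    double dv = v - keyV;
    return du * du + dv * dv < tolerance * tolerance;
}
```

The same distance test translates directly into a GLSL fragment shader for the OpenGL filter.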
However, this is still going to have edge cases. For example, what happens if your bottle is red, or has a red label? To solve that you need some form of geometric feature extraction (e.g. a Sobel edge detect over the image), and then a color replace only within a specific identified region.
None of this is trivial - accurate object detection and recognition is very hard and still very much under active research (think of a Tesla's cruise control trying to spot and avoid objects while driving).

Android Shape Recognition on Screen

I want to recognize shapes like circles, triangles, and rectangles drawn on the screen. My main aim is that a user draws a shape on the screen, and I need code to recognize this shape. How should I approach this problem?
What you are trying to achieve can be quite tricky, but I happened to implement something similar a while ago, and here is the approach that I used:
stick to black & white drawings
have a smallish database of (black & white) drawings (50 or so) with a fixed resolution, let's say 256x256 (you can store them in sqlite as binary blobs if you wish). Make sure that you use decently thick lines for these drawings (10 px should be OK, or something about twice as thick as the user's input drawing). Also, the drawings should be normalized, meaning that they must have at least one of their dimensions as large as the image itself.
extract the shape drawn by the user and process it:
a) if it has an aspect ratio close to a square, then simply crop the white space around it and enlarge it such that it has the same size as your database images
b) Otherwise, it will most likely have one dimension about two times larger than the other, in which case you crop the white space, rotate it so that its height is its biggest dimension, enlarge it to 256x128 and then add 64 px of white space on both sides.
you'll have to compare your drawing with each of your database images pixel by pixel and determine the number of black pixels which overlap for each database image. Then you sort these numbers and the highest one is your best match. Even if the best match has less than 20% overlapping pixels, the results are usually good (a code sketch of this comparison, and of the rotation step below, follows this list).
Because some shapes can be considered the same, even if they are rotated (imagine various ways to place a triangle in an image: one tip pointing up, or down, or towards one side etc), you'll probably want to rotate your input drawing around 12 - 24 times (by 15 - 30 degrees at each step) and compare each rotation to every image in your database. Given that this step will most likely require a lot of processing power, you might consider storing all the rotations of your initial database drawings in the database, as different pictures, thus making the database bigger, but saving you the effort of rotating the input image, which is costly.
Given that the above algorithm is a bit of a resource hog, you might consider having a server somewhere, which can do the actual comparisons, especially if you want to add many images to your database. Since I already implemented this algorithm for a demo application, I can already tell you that you're going to have to do a lot of pixel operations. Also, rotating images with the Android SDK can be annoying, because it changes the image dimensions...
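For illustration, here is a rough Java sketch of the overlap comparison and the rotation loop; the method names are mine, and both bitmaps are assumed to be 256x256 and already binarized (black on white):

```java
import android.graphics.Bitmap;
import android.graphics.Matrix;

// Fraction of the template's black pixels that the drawing also covers.
static double overlapScore(Bitmap drawing, Bitmap template) {
    int overlap = 0;
    int templateBlack = 0;
    for (int y = 0; y < template.getHeight(); y++) {
        for (int x = 0; x < template.getWidth(); x++) {
            boolean templateIsBlack = (template.getPixel(x, y) & 0xFF) < 128;
            boolean drawingIsBlack = (drawing.getPixel(x, y) & 0xFF) < 128;
            if (templateIsBlack) {
                templateBlack++;
                if (drawingIsBlack) overlap++;
            }
        }
    }
    return templateBlack == 0 ? 0 : (double) overlap / templateBlack;
}

// Try the drawing at several rotations and keep the best score. Note
// that Bitmap.createBitmap with a rotation Matrix grows the output to
// fit the rotated content, hence the rescale back to 256x256.
static double bestScore(Bitmap drawing, Bitmap template) {
    double best = 0;
    for (int degrees = 0; degrees < 360; degrees += 15) {
        Matrix m = new Matrix();
        m.postRotate(degrees);
        Bitmap rotated = Bitmap.createBitmap(drawing, 0, 0,
                drawing.getWidth(), drawing.getHeight(), m, true);
        Bitmap resized = Bitmap.createScaledBitmap(rotated, 256, 256, true);
        best = Math.max(best, overlapScore(resized, template));
    }
    return best;
}
```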
If you are feeling adventurous, here are a couple of papers describing state of the art algorithms for tackling this problem: "Shape contexts enable efficient retrieval of similar shapes" by Greg Mori, Serge Belongie and Jitendra Malik (2001) and "Shape Matching: Similarity Measures and Algorithms" by Remco C. Veltkamp (2001). The maths might be a bit heavy, though.
You should look into GestureOverlayView.
A good tutorial is: http://www.vogella.com/articles/AndroidGestures/article.html
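In outline, the GestureOverlayView approach looks like this (a sketch assuming a gesture library built with the Gesture Builder tool and shipped as res/raw/gestures; the view ID and the score cutoff are assumptions):

```java
import android.gesture.Gesture;
import android.gesture.GestureLibraries;
import android.gesture.GestureLibrary;
import android.gesture.GestureOverlayView;
import android.gesture.Prediction;
import java.util.ArrayList;

// Inside the Activity's onCreate, after setContentView(...):
// load a gesture library built with the Gesture Builder tool.
final GestureLibrary library = GestureLibraries.fromRawResource(this, R.raw.gestures);
library.load();

GestureOverlayView overlay = findViewById(R.id.gestureOverlay);
overlay.addOnGesturePerformedListener(new GestureOverlayView.OnGesturePerformedListener() {
    @Override
    public void onGesturePerformed(GestureOverlayView view, Gesture gesture) {
        ArrayList<Prediction> predictions = library.recognize(gesture);
        // The best match comes first; the score cutoff is an assumption.
        if (!predictions.isEmpty() && predictions.get(0).score > 1.0) {
            String shapeName = predictions.get(0).name; // e.g. "circle"
        }
    }
});
```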

Gamma Correction on an Image in Android

I am building an Android app which provides the user with some image-processing functionality. But before applying any image transformation I would like to do gamma correction to improve the image. I know how to perform gamma correction, but I don't know what gamma value to use, as the image itself doesn't carry the gamma value with which it was created. Any information regarding how to select a gamma value for a particular image would be very helpful.
It appears that what you really want is to lighten or darken the average brightness of an image to match some optimum value. Yes, the gamma function can do that. It might not be the best choice; in fact, for under- or overexposure a simple linear multiplication might be better. But let's stick with gamma for now.
Measure the average brightness of the image and call it a, with values from 0-255. You have a target for the optimum brightness, let's call that t. If the unknown gamma is g then you get:
t/255 = (a/255)^g
Solving for g gives:
g = log(t/255) / log(a/255)
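A minimal sketch of that calculation, assuming a grayscale int[] in [0, 255] (the method name and target value are my own):

```java
// Solve t/255 = (a/255)^g for g, then apply the gamma curve per pixel.
static int[] gammaCorrect(int[] gray, double target) {
    double sum = 0;
    for (int p : gray) sum += p;
    double avg = sum / gray.length;                       // average brightness a
    double g = Math.log(target / 255.0) / Math.log(avg / 255.0);

    int[] out = new int[gray.length];
    for (int i = 0; i < gray.length; i++) {
        out[i] = (int) Math.round(255.0 * Math.pow(gray[i] / 255.0, g));
    }
    return out;
}
```

For example, gammaCorrect(pixels, 128) would pull the average brightness toward the middle of the range.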

What is a bitmap threshold?

I have read several internet articles about drawing fluids. They refer to taking a bitmap, blurring it and then applying a threshold. From what I can determine, it looks like it might be some type of color replacement. Is that true?
I am not seeing any Android Bitmap or Paint method called "threshold". So my question is: what is a bitmap threshold, and/or does Android have an equivalent function?
I think I understand what you are talking about. Imagine an image with several circles that are close to each other (but not necessarily touching). When the image gets blurred, the blurred parts of the new image may touch, merge, and generally look like an amorphous blob of fluid. When you threshold the image, you effectively choose a brightness value below which all image data is discarded.
So, for example, if you wanted to threshold the image at 50%, all RGB pixel values that are greater than 50% will be kept and all others discarded. The threshold function in this case would sum the red, green, and blue channels and divide by 3; if the value is greater than 0xFF/2, the pixel is kept.
Setting how much the image gets blurred and the level of thresholding will cause the image to look more or less connected.
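That description translates into something like the following per-pixel function (a sketch, not a built-in Android API, which is why you won't find a ready-made method):

```java
// Keep a pixel (white) if the mean of its R, G, B channels exceeds
// half of full scale (0xFF / 2); otherwise discard it (black).
static int thresholdPixel(int argb) {
    int r = (argb >> 16) & 0xFF;
    int g = (argb >> 8) & 0xFF;
    int b = argb & 0xFF;
    int mean = (r + g + b) / 3;
    return mean > 0x7F ? 0xFFFFFFFF : 0xFF000000; // opaque white or black
}
```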
