What is a bitmap threshold? - android

I have read several internet articles about drawing fluids. They refer to taking a bitmap, blurring it and then applying a threshold. From what I can determine it looks like it might be some type of color replacement. Is that true?
I am not seeing any Android Bitmap or Paint method called "threshold". So my question is: "What is a bitmap threshold?" and/or "Does Android have an equivalent function?"

I think I understand what you are talking about. Imagine an image with several circles that are close to each other (but not necessarily touching). When the image gets blurred, the blurred parts of the new image may touch, merge, and generally look like an amorphous blob of fluid. When you threshold the image, you effectively choose an intensity value below which all image data is discarded.
So, for example, if you wanted to threshold the image at 50%, all pixels whose intensity is greater than 50% would be kept, and all others discarded. The threshold function in this case would sum the red, green, and blue channels and divide by 3; if the result is greater than 0xFF/2, the pixel is kept.
Setting how much the image gets blurred and the level of thresholding will cause the image to look more or less connected.
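As a concrete illustration, here is a minimal sketch of such a threshold pass on an Android Bitmap, assuming ARGB_8888 pixels; the class and method names are my own, not a standard API:
import android.graphics.Bitmap;
import android.graphics.Color;

public final class ThresholdFilter {
    // Keeps pixels whose average RGB intensity exceeds 'cutoff' (0-255)
    // and discards the rest by blanking them to transparent.
    public static Bitmap threshold(Bitmap src, int cutoff) {
        int w = src.getWidth(), h = src.getHeight();
        int[] pixels = new int[w * h];
        src.getPixels(pixels, 0, w, 0, 0, w, h);
        for (int i = 0; i < pixels.length; i++) {
            int p = pixels[i];
            int avg = (Color.red(p) + Color.green(p) + Color.blue(p)) / 3;
            pixels[i] = (avg > cutoff) ? p : Color.TRANSPARENT;
        }
        Bitmap out = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        out.setPixels(pixels, 0, w, 0, 0, w, h);
        return out;
    }
}
For the 50% example above, you would call threshold(blurredBitmap, 0xFF / 2).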

Related

Android black and white image processing

Hi, I am developing a camera application in which I have to do black-and-white image processing. I googled and found only grayscale image processing. I want to convert my image into black and white like CamScanner does. I also tried OpenCV, but the result was not up to our expectations. If anybody has solved this, please let me know. Thank you.
You will start with a gray-value int[] or byte[] array with intensity values in the range [0, 255]. What you need is a threshold thres, so that all pixels with intensity below that threshold are set to black (0) and all pixels with intensity equal to or above that threshold are set to white (255). For determining the optimal threshold, the Otsu method is a well-established approach. It is rather intuitive: since the threshold divides the pixels into two subsets, you take the threshold value that minimizes the variance within the two subsets - which is the same as maximizing the variance between the two subsets. As you can see from the Wikipedia link, the calculation is rather simple, and they also provide the Java code. I work with this too, and it is rather efficient.
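For reference, here is a compact sketch of the idea; the Wikipedia article's Java version is equivalent in spirit, and the variable names here are my own:
// Otsu's method over a grayscale intensity array (values 0-255):
// pick the threshold that maximizes the between-class variance.
public static int otsuThreshold(int[] gray) {
    int[] hist = new int[256];
    for (int v : gray) hist[v]++;

    int total = gray.length;
    long sumAll = 0;
    for (int t = 0; t < 256; t++) sumAll += (long) t * hist[t];

    long sumBg = 0;   // weighted sum of intensities in the background class
    int weightBg = 0; // pixel count in the background class
    double bestVar = -1;
    int bestT = 0;
    for (int t = 0; t < 256; t++) {
        weightBg += hist[t];
        if (weightBg == 0) continue;
        int weightFg = total - weightBg;
        if (weightFg == 0) break;
        sumBg += (long) t * hist[t];
        double meanBg = (double) sumBg / weightBg;
        double meanFg = (double) (sumAll - sumBg) / weightFg;
        // maximizing the between-class variance is equivalent to
        // minimizing the within-class variance
        double betweenVar = (double) weightBg * weightFg
                * (meanBg - meanFg) * (meanBg - meanFg);
        if (betweenVar > bestVar) { bestVar = betweenVar; bestT = t; }
    }
    return bestT;
}
Pixels below the returned threshold become black (0); the rest become white (255).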

How to break the image into shattered shapes in android

I am trying to break an image into shattered pieces, but I am unable to work out the logic. Please show me a way to achieve this.
I hope the image below conveys my idea of what I want: breaking the bitmap into shattered pieces such as triangles or other shapes. Later I will shuffle those bitmap pieces and present them as a puzzle for the end user to rearrange in order.
OK, if you want to rearrange the pieces (like in a jigsaw) then each triangle/polygon will have to appear in a rectangular bitmap with a transparent background, because that's how drawing bitmaps works in Java/Android (and most other environments).
There is a way to do this sort of masking in Android; it's called Porter-Duff compositing. The Android documentation is poor to non-existent, but there are many articles on its use in Java.
Basically you create a rectangular transparent bitmap just large enough to hold your cut-out. Then you draw onto this bitmap a filled triangle (with non-zero alpha) representing the cut-out; it can be any colour you like. Then draw the source image on top of the cut-out shape at the correct location, using the Porter-Duff mode that keeps the cut-out's transparency data but takes the image's RGB data. You will be left with your cutout against a transparent background.
This is much easier if you make the cutout bitmap the same size as the source image. I would recommend getting this working first. The downsides of this are twofold. Firstly you will be moving around large bitmaps to move around small cutouts, so the UI will be slower. Secondly you will use a lot of memory for bitmaps, and on some versions of Android you may well run out of memory.
But once you have it working for bitmaps the same size as the source image, it should be pretty straightforward to change it to work for smaller bitmaps. Most of your "mucking about" will be in finding and using the correct Porter-Duff mode. As there are only 16 of them, it's no great effort to try them all and see what they do. And they may suggest other puzzle ideas.
I note your cutout sections are all polygons. With only a tiny amount of extra complexity, you could make them any shape you like, including looking like regular jigsaw pieces. To do this, use the Path class to define the shapes used for cutouts. The Path class works fine with Porter-Duff compositing, allowing cutouts of almost any shape you can imagine. I use this extensively in one of my apps.
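Here is a minimal sketch of the full-size-mask variant described above. Under these assumptions, the mode that does the trick is SRC_IN, which keeps the drawn source image only where the destination (the mask) is opaque; the method name and parameters are my own:
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;
import android.graphics.PorterDuff;
import android.graphics.PorterDuffXfermode;

// Cut a triangular piece out of 'src', using a mask bitmap the same
// size as the source image (the "get it working first" variant).
public static Bitmap cutTriangle(Bitmap src, float x1, float y1,
                                 float x2, float y2, float x3, float y3) {
    Bitmap out = Bitmap.createBitmap(src.getWidth(), src.getHeight(),
            Bitmap.Config.ARGB_8888); // starts fully transparent
    Canvas canvas = new Canvas(out);

    // 1. Draw the filled cut-out shape; only its alpha matters.
    Path triangle = new Path();
    triangle.moveTo(x1, y1);
    triangle.lineTo(x2, y2);
    triangle.lineTo(x3, y3);
    triangle.close();
    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    canvas.drawPath(triangle, paint);

    // 2. SRC_IN: take the source's RGB, keep the mask's alpha.
    paint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.SRC_IN));
    canvas.drawBitmap(src, 0, 0, paint);
    paint.setXfermode(null);
    return out;
}
Swapping the Path for any other shape (including jigsaw-style outlines) works the same way.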
I am not sure what puzzle game you are trying to make, but if there are no special requirements on the shattered pieces, only a total number of pieces that together span the whole rectangle, you may try the following steps. The idea rests on the fact that n non-intersecting lines, each with its two end points lying on one of the 4 edges of the rectangle, divide the rectangle into n+1 disjoint areas.
1. Create an array and store the line information.
2. For n times, randomly pick two end points which lie on those 4 edges of the rectangle.
2a. Try to join these two points: starting from either end point, if you hit an intersection with a line you drew before, stop at the intersection; otherwise stop at the other end point.
3. You will get n+1 disjoint areas from the n lines drawn.
You may constrain your choice of lines if you have special requirements on the areas. For implementation details (a sketch follows below), you may want to have a look at segment-intersection tests (via vector cross products) and Euler's formula for planar subdivisions.
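A sketch of that line-drawing step, under the assumptions above; the helper names are my own, and for brevity the clipping stops at the first earlier line found rather than the intersection nearest the starting point:
import android.graphics.PointF;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public final class Shatter {
    private static final Random RNG = new Random();

    // Pick a uniformly random point on the perimeter of a w x h rectangle.
    static PointF randomPerimeterPoint(float w, float h) {
        float t = RNG.nextFloat() * 2 * (w + h); // distance along the perimeter
        if (t < w)         return new PointF(t, 0);         // top edge
        if (t < w + h)     return new PointF(w, t - w);     // right edge
        if (t < 2 * w + h) return new PointF(t - w - h, h); // bottom edge
        return new PointF(0, t - 2 * w - h);                // left edge
    }

    // Intersection of segments ab and cd via cross products, or null if none.
    static PointF intersect(PointF a, PointF b, PointF c, PointF d) {
        float rx = b.x - a.x, ry = b.y - a.y;
        float sx = d.x - c.x, sy = d.y - c.y;
        float denom = rx * sy - ry * sx; // cross(r, s); zero means parallel
        if (denom == 0) return null;
        float t = ((c.x - a.x) * sy - (c.y - a.y) * sx) / denom;
        float u = ((c.x - a.x) * ry - (c.y - a.y) * rx) / denom;
        if (t < 0 || t > 1 || u < 0 || u > 1) return null;
        return new PointF(a.x + t * rx, a.y + t * ry);
    }

    // Steps 1-3: draw n lines, clipping each at an intersection with an
    // earlier line (a fuller version would clip at the hit nearest to p).
    static List<PointF[]> drawLines(int n, float w, float h) {
        List<PointF[]> lines = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            PointF p = randomPerimeterPoint(w, h);
            PointF q = randomPerimeterPoint(w, h);
            for (PointF[] prev : lines) {
                PointF hit = intersect(p, q, prev[0], prev[1]);
                if (hit != null) { q = hit; break; }
            }
            lines.add(new PointF[] { p, q });
        }
        return lines;
    }
}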

Detection of different shapes dynamically (circle, square and rectangle) from the camera? [closed]

I want to create an application that detects the shapes of objects (circles, squares, and rectangles only, i.e. geometric shapes), and it should not use a marker-less or edge-based way of detecting the shape in the augmentation.
I have gone through the procedures of the existing tutorials, such as:
1) Metaio : http://dev.metaio.com/sdk/tutorials/hello-world/
2) OpenCV : http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html#canny-detector
These are the things I have tried to implement.
Geometry shapes:
1) Circle: in real time this could be any circular object
2) Square: in real time this could be any square object
3) Rectangle: in real time this could be any rectangular object
How can I achieve this augmentation scenario?
Thanks in advance
Update: This StackOverflow post (with some nice sample pictures included) seems to have solved at least the circle-detection part of your problem. The reference to the excellent write-up it points to can be found on this wiki page (only through the Wayback Machine, unfortunately).
In case that new link doesn't hold either, here is the relevant section:
Detecting Images:
There are a few fiddly bits that need to be taken care of to detect circles in an image. Before you process an image with cvHoughCircles - the function for circle detection - you may wish to first convert it into a gray image and smooth it. The following is the general procedure for the functions you need to use, with examples of their usage.
Create Image
Supposing you have an initial image for processing called 'img', first you want to create an image variable called 'gray' with the same dimensions as img using cvCreateImage.
IplImage* gray = cvCreateImage( cvGetSize(img), 8, 1 ); // allocate a 1 channel byte image
CvMemStorage* storage = cvCreateMemStorage(0);
IplImage* cvCreateImage(CvSize size, int depth, int channels);
size: cvSize(width,height);
depth: pixel depth in bits: IPL_DEPTH_8U, IPL_DEPTH_8S, IPL_DEPTH_16U,
IPL_DEPTH_16S, IPL_DEPTH_32S, IPL_DEPTH_32F, IPL_DEPTH_64F
channels: Number of channels per pixel. Can be 1, 2, 3 or 4. The channels
are interleaved. The usual data layout of a color image is
b0 g0 r0 b1 g1 r1 ...
Convert to Gray
Now you need to convert it to gray using cvCvtColor, which converts between colour spaces.
cvCvtColor( img, gray, CV_BGR2GRAY );
cvCvtColor(src,dst,code); // src -> dst
code = CV_<X>2<Y>
<X>/<Y> = RGB, BGR, GRAY, HSV, YCrCb, XYZ, Lab, Luv, HLS
e.g.: CV_BGR2GRAY, CV_BGR2HSV, CV_BGR2Lab
Smooth Image
This is done to prevent a lot of false circles from being detected. You might need to play around with the last two parameters, noting that both need to be odd numbers.
cvSmooth( gray, gray, CV_GAUSSIAN, 9, 9 );
// smooth it, otherwise a lot of false circles may be detected
void cvSmooth( const CvArr* src, CvArr* dst,
int smoothtype=CV_GAUSSIAN,
int param1, int param2);
src
The source image.
dst
The destination image.
smoothtype
Type of the smoothing:
CV_BLUR_NO_SCALE (simple blur with no scaling) - summation over a pixel param1×param2 neighborhood. If the neighborhood size is not fixed, one may use cvIntegral function.
CV_BLUR (simple blur) - summation over a pixel param1×param2 neighborhood with subsequent scaling by 1/(param1•param2).
CV_GAUSSIAN (gaussian blur) - convolving image with param1×param2 Gaussian.
CV_MEDIAN (median blur) - finding median of param1×param1 neighborhood (i.e. the neighborhood is square).
CV_BILATERAL (bilateral filter) - applying bilateral 3x3 filtering with color sigma=param1 and space sigma=param2
param1
The first parameter of smoothing operation.
param2
The second parameter of smoothing operation.
In the case of simple scaled/non-scaled and Gaussian blur, if param2 is zero, it is set to param1.
Detect using Hough Circle
The function cvHoughCircles is used to detect circles on the gray image. Again the last two parameters might need to be fiddled around with.
CvSeq* circles =
cvHoughCircles( gray, storage, CV_HOUGH_GRADIENT, 2, gray->height/4, 200, 100 );
CvSeq* cvHoughCircles( CvArr* image, void* circle_storage,
int method, double dp, double min_dist,
double param1=100, double param2=100,
int min_radius=0, int max_radius=0 );
======= End of relevant section =========
The rest of that wiki page is actually very good (although I'm not going to recopy it here, since the rest is off-topic to the original question and StackOverflow has a size limit on answers). Hopefully, that link to the cached copy on the Wayback Machine will keep working indefinitely.
Previous Answer Before my Update:
Great! Now that you've posted some examples, I can see that you're not only after rectangles, squares, and circles; you also want to find those shapes in a 3D environment, thus potentially hunting for special cases of parallelograms and ovals that, from video frame to video frame, can eventually reveal themselves to be rectangles, squares, and/or circles (depending on how you pan the camera).
Personally, I find it easier to work through a problem myself than to try to understand how to use an existing (often very mature) library. This is not to say that my own work will be better than a mature library; it certainly won't be. It's just that once I can work through a problem myself, it becomes easier for me to understand and use a library (which will often run much faster and smarter than my own solution).
So the next step I would take is to change the color space of the bitmap into grayscale. A color bitmap is hard for me to understand and manipulate, especially since there are so many different ways it can be represented, but a grayscale bitmap is much easier on both counts. For a grayscale bitmap, just imagine a grid of values, with each value representing a different light intensity.
And for now, let's limit the scope of the problem to finding parallelograms and ovals inside a static 2D environment (we'll worry about processing 3D environments and moving video frames later, or should I say, you'll worry about that part yourself since that problem is already becoming too complicated for me).
And for now also, let's not worry about what tool or language you use. Just use whatever is easiest and most expedient. For instance, just about anything can be scripted to automatically convert an image to grayscale, assuming time is no issue: ImageMagick, Gimp, Marvin, Processing, Python, Ruby, Java, etc.
And with any of those tools, it should be easy to group pixels with similar enough intensities (to make the calculations more manageable) and to sort each pixel's coordinates into a different array for each light-intensity bucket. In other words, it shouldn't be too difficult to arrange some sort of crude histogram of arrays, sorted by intensity, that contain each pixel's x and y positions.
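A sketch of that bucketing idea in plain Java (the bucket count of 16 below is an arbitrary choice, and the method name is my own):
import java.awt.Point;
import java.util.ArrayList;
import java.util.List;

// Group pixels into intensity buckets and record each pixel's position
// under its bucket - the "crude histogram of coordinate arrays".
public static List<List<Point>> bucketByIntensity(int[][] gray, int buckets) {
    List<List<Point>> byBucket = new ArrayList<>();
    for (int b = 0; b < buckets; b++) byBucket.add(new ArrayList<Point>());
    for (int y = 0; y < gray.length; y++) {
        for (int x = 0; x < gray[y].length; x++) {
            int bucket = gray[y][x] * buckets / 256; // maps 0..255 to 0..buckets-1
            byBucket.get(bucket).add(new Point(x, y));
        }
    }
    return byBucket;
}
A call like bucketByIntensity(grayPixels, 16) then gives you 16 coordinate lists to feed into the next step.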
After that, the problem becomes a problem more like this one (which can be found on StackOverflow) and thus can be worked upon with its suggested solution.
And once you're able to work through the problem that way, converting the solution you come up with to a language better suited for the task shouldn't be too difficult. It should also be much easier to understand and use the underlying functions of whatever existing library you end up choosing for the task. At least, that's what I'm hoping for, since I'm not familiar enough with the OpenCV libraries themselves to really help you with them.

Android Shape Recognition on Screen

I want to recognize shapes like a circle, triangle, and rectangle drawn on screen. My main aim is that a user draws a shape on screen, and I need code to recognize that shape. How should I approach this problem?
What you are trying to achieve can be quite tricky, but I happened to implement something similar a while ago, and here is the approach that I used:
stick to black & white drawings
have a smallish database of (black & white) drawings (50 or so) with a fixed resolution, let's say 256x256 (you can store them in sqlite as binary blobs if you wish). Make sure that you use decently thick lines for these drawings (10 px should be OK, or something about twice as thick as the user's input drawing). Also, the drawings should be normalized, meaning that they must have at least one of their dimensions as large as the image itself.
extract the shape drawn by the user and process it:
a) if it has an aspect ratio close to a square, then simply crop the white space around it and enlarge it such that it has the same size as your database images
b) Otherwise, it will most likely have one dimension about two times larger than the other one, in which case you crop the white space, rotate it to have the height as its biggest dimension, enlarge it to 256x128 and then add 64 px of white space on both sides.
you'll have to compare your drawing with each of your database images pixel by pixel and determine the number of black pixels which overlap for each database image (see the sketch after this list). Then you sort these numbers and you'll get the best match. Even if the best match has less than 20% overlapping pixels, the results are usually good.
Because some shapes can be considered the same, even if they are rotated (imagine various ways to place a triangle in an image: one tip pointing up, or down, or towards one side etc), you'll probably want to rotate your input drawing around 12 - 24 times (by 15 - 30 degrees at each step) and compare each rotation to every image in your database. Given that this step will most likely require a lot of processing power, you might consider storing all the rotations of your initial database drawings in the database, as different pictures, thus making the database bigger, but saving you the effort of rotating the input image, which is costly.
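A minimal sketch of that overlap score, assuming both bitmaps have already been normalized to 256x256 boolean masks (true = black); the name and signature are my own:
// Fraction of the drawing's black pixels that land on the template's
// black pixels; the template with the highest score is the best match.
public static double overlapScore(boolean[][] drawing, boolean[][] template) {
    int overlap = 0, drawingBlack = 0;
    for (int y = 0; y < drawing.length; y++) {
        for (int x = 0; x < drawing[y].length; x++) {
            if (drawing[y][x]) {
                drawingBlack++;
                if (template[y][x]) overlap++;
            }
        }
    }
    return drawingBlack == 0 ? 0 : (double) overlap / drawingBlack;
}
Run this once per template (and once per rotation, if you rotate the input) and keep the highest score.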
Given that the above algorithm is a bit of a resource hog, you might consider having a server somewhere, which can do the actual comparisons, especially if you want to add many images to your database. Since I already implemented this algorithm for a demo application, I can already tell you that you're going to have to do a lot of pixel operations. Also, rotating images with the Android SDK can be annoying, because it changes the image dimensions...
If you are feeling adventurous, here are a couple of papers describing state of the art algorithms for tackling this problem: "Shape contexts enable efficient retrieval of similar shapes" by Greg Mori, Serge Belongie and Jitendra Malik (2001) and "Shape Matching: Similarity Measures and Algorithms" by Remco C. Veltkamp (2001). The maths might be a bit heavy, though.
You should look into GestureOverlayView.
A good tutorial is: http://www.vogella.com/articles/AndroidGestures/article.html
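If you go the GestureOverlayView route, the recognition loop is roughly this. A sketch for inside an Activity's onCreate, assuming a layout containing a GestureOverlayView with id gestures and a gesture library (built with the SDK's Gesture Builder) shipped as res/raw/gestures; both resource names are assumptions:
import android.gesture.Gesture;
import android.gesture.GestureLibraries;
import android.gesture.GestureLibrary;
import android.gesture.GestureOverlayView;
import android.gesture.Prediction;
import java.util.ArrayList;

final GestureLibrary library = GestureLibraries.fromRawResource(this, R.raw.gestures);
library.load(); // returns false if the library could not be read

GestureOverlayView overlay = (GestureOverlayView) findViewById(R.id.gestures);
overlay.addOnGesturePerformedListener(new GestureOverlayView.OnGesturePerformedListener() {
    @Override
    public void onGesturePerformed(GestureOverlayView view, Gesture gesture) {
        // Predictions come back sorted by score; the first is the best match.
        ArrayList<Prediction> predictions = library.recognize(gesture);
        if (!predictions.isEmpty() && predictions.get(0).score > 1.0) {
            String shapeName = predictions.get(0).name; // e.g. "circle"
        }
    }
});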

Bitmap.Config differences with/without alpha channel on a canvas

I am drawing some Bitmaps to a canvas. Some (most) of these bitmaps utilize the alpha channel, and transparency/translucency is critical for the image to look correct. This is necessary due to some image manipulation I perform throughout the Activity.
Eventually the user is finished with their task, and I take the canvas and save it to a PNG using this method:
Bitmap.createBitmap(this.getWidth(), this.getHeight(), Bitmap.Config.ARGB_8888);
At this point I no longer make any modifications to the canvas/image but I do display the image to a Canvas in another Activity. I want it to look exactly the same as in the previous Activity, so does it matter if I use Bitmap.Config.RGB_565 instead or will I lose information?
You will definitely lose information if you use Bitmap.Config.RGB_565. It may not be discernible on a phone display, but you're going from 24 bits of color information (not counting alpha) to 16 bits of color information when you go from RGB_8888 to RGB_565. Instead of 256 distinct red, green, and blue values in RGB_8888, there are only 64 distinct green values and 32 distinct red and blue values in RGB_565. This may cause banding or other sorts of quantization artifacts in your RGB_565 image.
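Also worth noting: RGB_565 has no alpha channel at all, so the translucency you described would be dropped outright, not just quantized. If you still want to try it (e.g. to halve memory use), the conversion is a one-liner; a sketch, where argb8888Bitmap stands in for your existing bitmap:
// Copies the bitmap into a 16-bit format; alpha is discarded and each
// channel is quantized (5 bits red, 6 bits green, 5 bits blue).
Bitmap rgb565 = argb8888Bitmap.copy(Bitmap.Config.RGB_565, /* isMutable = */ false);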
