Using OpenCV to analyze data from protein gels - android

So what I want to do is write an application that, at least in the future, could be ported to mobile platforms (such as Android), that can scan an image of a protein gel and return data such as the number of bands (i.e., weights) in a column, the relative concentration (thickness of each band), and the weight of each band in each column.
For those who aren't familiar: mixtures of denatured proteins (basically, molecules made completely straight) are loaded into each column, and with the use of electricity the proteins are pulled through a gel (because the proteins are polar molecules). The end columns on each side of this image http://i52.tinypic.com/205cyrl.gif are where you place a mixture of proteins of known weights (so if you have 4 different weights, the band on top is the largest weight, and the weight/size of the protein decreases the further it travels down). Is something like this possible to analyze using OpenCV? The given image is a really clean-looking gel; they can often get really messy (see Google Images). I figured that if I allowed a user to enter the number of columns, which columns contain known weight markers and their actual weights, and also provided an adjustable rectangle to size around the edges of the gel, then maybe it would be possible to scan and extract data from images of these gels? I skimmed through a textbook on OpenCV but didn't see any obvious and reliable way to approach this. Any ideas? Maybe a different library would be better suited?

I believe you can do this using OpenCV.
My approach would be a color-based separation, followed by counting the separated components.
In broad terms, your app would do the following:
Load the image, then rotate and scale it manually through the GUI of your app to match your needs.
Create a second, grayscale image in which each pixel holds a value in [0, 255] that represents how well the color of the original pixel matches the target color (in the case of this image, the shade of blue).
In one of my experiments I used the concept of fuzzy sets and alpha cuts to extract objects of a certain color. The triangular membership function gave me pretty good results. This simply means that I defined a triangular function for each of the three RGB color channels and summed their results for each input color. If the values of the color were close to the centers of the triangles, I had a strong similarity. Plus, by controlling the width of the triangles, you can define the tolerance of the matches. (Another option would be to use trapezoidal membership functions.)
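As a rough sketch of that triangular-membership idea (the target color and triangle width below are made-up values, not taken from a real gel):

```python
def tri(value, center, width):
    # Triangular membership: 1.0 at the center, falling off linearly
    # to 0.0 at +/- width.
    return max(0.0, 1.0 - abs(value - center) / width)

def color_similarity(pixel, target, width=60.0):
    # pixel and target are (R, G, B) tuples; the result is in [0, 255],
    # so it can be written straight into the grayscale match image.
    score = sum(tri(p, t, width) for p, t in zip(pixel, target)) / 3.0
    return int(round(score * 255))

# Hypothetical gel-band blue as the target color:
target_blue = (40, 60, 150)
print(color_similarity((40, 60, 150), target_blue))    # exact match -> 255
print(color_similarity((200, 200, 200), target_blue))  # background -> low
```

Narrowing `width` tightens the tolerance of the match, exactly as described above.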
At this point you have a grayscale image where the background (gel) is black and the proteins are gray/white. If you wish to clear up some noise, use the morphological operators (page 127) erode and dilate (cvErode and cvDilate in OpenCV).
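cvErode and cvDilate do all of this for you in OpenCV; purely to illustrate what the binary versions of those operators compute, here is a minimal sketch with a 3x3 square kernel:

```python
def dilate(img):
    # Binary dilation: a pixel becomes 1 if any pixel in its 3x3
    # neighbourhood (including itself) is 1 -- blobs grow.
    h, w = len(img), len(img[0])
    return [[max(img[ny][nx]
                 for ny in range(max(0, y - 1), min(h, y + 2))
                 for nx in range(max(0, x - 1), min(w, x + 2)))
             for x in range(w)] for y in range(h)]

def erode(img):
    # Binary erosion: a pixel stays 1 only if its whole 3x3
    # neighbourhood is 1 -- this is what removes speckle noise.
    h, w = len(img), len(img[0])
    return [[min(img[ny][nx]
                 for ny in range(max(0, y - 1), min(h, y + 2))
                 for nx in range(max(0, x - 1), min(w, x + 2)))
             for x in range(w)] for y in range(h)]

noisy = [[0, 0, 0, 0],
         [0, 1, 0, 0],
         [1, 1, 1, 1],
         [1, 1, 1, 1]]
# Eroding kills the lone speckle at (1, 1); eroding then dilating
# ("opening") removes noise while roughly preserving the large blob.
print(dilate(erode(noisy)))
```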
After that, you can use this great OpenCV-based blob extraction library to extract the bounding boxes of the remaining gray areas, which represent the proteins.
Having all the coordinates of the bounding boxes, you can apply your own algorithms to extract whatever data you wish.
In my opinion OpenCV gives you all the necessary tools. A fully automated solution might be hard to obtain, but I'm sure you can easily build a GUI where you set the parameters of the operators applied in the steps described above.
As for Android: I haven't developed for mobile platforms, but I know that you can create C++ apps for these devices (I've read several questions regarding iPhone & OpenCV), so I think your app would be portable, or at least the image-processing part of it (the GUI might be too platform-specific).

Related

Displaying a route on a floor plan image

I have a floor plan on which the walls are black, the doors are orange, and the target is red. What I want is to make an app where, given a specific point on the image, the route to the target is calculated and displayed. I already have a routing method, but it is in MATLAB, each position and object is defined in the code, and it doesn't use an image. What I would like to know is how to scan the image to identify the walls, the doors, and the target by color, in order to apply the routing method and then display the route over the image of the map (I guess I should use a drawable for that).
These are some steps to implement a pathfinding algorithm from an image:
Load your image.
Apply a color-detection algorithm in HSV space (in real life it is easier to handle lighting changes in this format) to obtain the objects separately.
Build a binary matrix with 1 for your floor and 0 for the obstacles.
Apply an occupancy-grid algorithm to that binary matrix (this reduces the size of your matrix, which saves processing in the pathfinding step).
Now run your pathfinding algorithm. I recommend Dijkstra or the A* algorithm; in both cases you need to construct an adjacency matrix.
Graph theory will help you understand this better. Good luck!
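The binary-matrix and pathfinding steps above can be sketched like this (a plain Dijkstra on a 4-connected grid; the tiny floor plan here is made up):

```python
import heapq

def dijkstra(grid, start, goal):
    # grid: occupancy matrix, 1 = free floor, 0 = obstacle.
    # Returns the list of (row, col) cells on a shortest path,
    # or None if the goal is unreachable.
    h, w = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            path = [cell]
            while cell in prev:       # walk back to the start
                cell = prev[cell]
                path.append(cell)
            return path[::-1]
        if d > dist.get(cell, float("inf")):
            continue                  # stale heap entry
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] == 1:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None

floor_plan = [[1, 1, 1, 1],
              [0, 0, 1, 0],
              [1, 1, 1, 1]]
path = dijkstra(floor_plan, (0, 0), (2, 3))
print(path)  # shortest route around the wall cells
```

A* would only add a heuristic (e.g. Manhattan distance to the goal) to the priority; the grid itself already encodes the adjacency, so no explicit adjacency matrix is needed in this formulation.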
You can work in the Processing IDE for rapid prototyping and migrate the Processing core to Eclipse; you need to implement the PApplet class in your Eclipse project, and then you can compile your app for Android.
I would use some kind of occupancy grid/map where each grid cell = one pixel (or possibly a small collection of pixels, like 2x2, 3x3, etc.) and just do k-means clustering on the image. There are a few choices for k:
k=2
the walls are one group (the black lines)
everything else is considered open space (this assumes doors can be opened).
You will need to know where the red point is located, but it doesn't need to be visible in your map. It is just another open space in your map that your program internally knows is the endpoint.
k=4
a group for everything: black = walls (occupied), orange = doors (may or may not look like occupied cells depending on whether or not they can be opened), red = target (unoccupied), and white = open space (unoccupied).
In both cases you can generate labels for your clusters and use those in your map. I'm not sure what exactly your pathfinding algorithm is, but typically the goal is to minimize some cost function, and as such you assign an extremely high cost to walls (so they will never be crossed) and possibly a medium cost to doors (in case they can't be opened). Just some ideas; good luck.
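A minimal sketch of the k=2 idea, clustering pixel colors with a toy k-means (the sample pixels are made up; on a real image you would use OpenCV's built-in kmeans instead):

```python
def kmeans_colors(pixels, k=2, iters=10):
    # Tiny k-means over RGB tuples: assign each pixel to the nearest
    # centroid, then move each centroid to the mean of its cluster.
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    centroids = list(pixels[:k])  # naive init: first k pixels
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            i = min(range(k), key=lambda i: sq_dist(p, centroids[i]))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster goes empty
                centroids[i] = tuple(sum(ch) / len(cl) for ch in zip(*cl))
    labels = [min(range(k), key=lambda i: sq_dist(p, centroids[i]))
              for p in pixels]
    return labels, centroids

# Dark "wall" pixels vs. near-white "open space" pixels:
pixels = [(0, 0, 0), (10, 5, 0), (250, 250, 250), (245, 255, 250), (5, 0, 5)]
labels, centers = kmeans_colors(pixels)
print(labels)  # wall-ish pixels share one label, white-ish the other
```

The resulting labels map directly onto occupancy-grid costs: the dark cluster gets the "wall" cost, the light cluster the "open" cost.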

Android: How to detect these objects in images? (Image included). Tried OpenCV and metaioSDK, but both are not working well enough

I have been working with object detection/recognition in images captured from an Android device camera recently.
The objects I am trying to detect are all kinds of buttons that look like this:
Picture of buttons
So far I have been trying with OpenCV and also with the metaio SDK. Results:
OpenCV was always detecting something, but gave lots of false hits, and it is too much work to collect all the pictures needed for what I have in mind. I have tried three approaches with OpenCV:
Feature detection (SURF, ORB, and so on) -> was way too slow, and there are not enough features on my objects.
Template matching -> seems to only work when the template is exactly a part of the scene image.
Training classifiers -> this worked the best so far, but is too much work for my goal, and it still gives too many false detections.
metaioSDK worked OK when I took my reference images (the icon part of each button) out of a picture like the one shown above, printed the full image, and pointed my Android device camera at the printed picture. But when I tried with the real buttons (not a picture of them), almost nothing was detected anymore. The metaio documentation says that the reference images need to have lots of features and color differences, and also should not consist only of white text. Well, as you can see, my reference images are exactly the opposite of what they should be. But that's just how the buttons look ;)
So, my question would be: does anyone have a suggestion about what else I could try in order to detect and recognize each of those buttons when I point my Android camera at them?
As a suggestion, you could try the following approach:
Class-Specific Hough Forest for Object Detection
They provide a C code implementation. Compile and run it to see the results, then replace the positive and negative training images with your own, according to the following rules:
In a car you will need to define the following 3 areas:
target region (the image you provided is a good representation of a target region)
nearby working area (this area has information regarding your target's relative location). I would recommend an area 3-5 times the size of the target region, around the target, as a good working area.
everything outside the above can be used as negative images
then,
Use "many" positive images (100-1000) at different viewing angles (-30 to +30 degrees) and various distances.
You will have to make assumptions at which viewing angles and distances your users will use the application. The more strict they are the better performance you will get. A simple "hint" camera overlay can give a good idea to people what you expect the working area to be.
Use a negative image set a few times (3-5) larger, which includes pictures of things that might appear in the camera view but should not contribute any target-position information.
Do not use big images; somewhere around 100-300 px in width should be enough.
Assemble the database, and modify the configuration file that the code comes with. Run the program, see if performance is OK for your needs.
The program will return a voting map (cloud) for the object you are looking for. Add a Gaussian blur to it and apply some threshold (you will have to make another assumption for this threshold value).
The extracted mask will define the area you are looking for. The size of the masked region gives you a good estimate of the object's scale. Given this information, it will be much easier to select a proper template and perform template matching.
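A minimal sketch of that threshold-and-mask step (the voting map and threshold here are made-up values; a real voting map would come from the Hough Forest code):

```python
def mask_bbox(vote_map, threshold):
    # Threshold the voting map and return the bounding box
    # (top, left, bottom, right) of all cells at or above the
    # threshold, or None if nothing passes. The threshold itself is
    # the assumption you have to tune for your data.
    rows = [r for r, row in enumerate(vote_map)
            if any(v >= threshold for v in row)]
    cols = [c for c in range(len(vote_map[0]))
            if any(row[c] >= threshold for row in vote_map)]
    if not rows:
        return None
    return (rows[0], cols[0], rows[-1], cols[-1])

votes = [[0.0, 0.1, 0.0, 0.0],
         [0.0, 0.8, 0.9, 0.0],
         [0.0, 0.7, 0.6, 0.0]]
print(mask_bbox(votes, 0.5))  # rough object location and scale
```

The width/height of this box is the scale estimate used to pick a properly sized template.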
(Also some thoughts.) You could try a small trick: use the goodFeaturesToTrack function with the mask you got to obtain a set of locations, and compare them with the corresponding locations on a template. Construct an SSD (sum of squared differences) and solve it for rotation, scale, and translation parameters by minimizing the alignment error (though I am not sure this approach will work).

Android Shape Recognition on Screen

I want to recognize shapes like circles, triangles, and rectangles drawn on the screen. My main aim is that a user draws a shape on the screen, and I need code to recognize this shape. How should I approach this problem?
What you are trying to achieve can be quite tricky, but I happened to implement something similar a while ago, and here is the approach that I used:
stick to black & white drawings
have a smallish database of (black & white) drawings (50 or so) with a fixed resolution, let's say 256x256 (you can store them in SQLite as binary blobs if you wish). Make sure that you use decently thick lines for these drawings (10 px should be OK, or about twice as thick as the user's input drawing). Also, the drawings should be normalized, meaning that at least one of their dimensions must be as large as the image itself.
extract the shape drawn by the user and process it:
a) if it has an aspect ratio close to a square, simply crop the white space around it and enlarge it so that it has the same size as your database images
b) otherwise, it will most likely have one dimension about two times larger than the other, in which case you crop the white space, rotate it so its height is the biggest dimension, enlarge it to 256x128, and then add 64 px of white space on each side.
You'll have to compare your drawing with each of your database images pixel by pixel and determine the number of black pixels that overlap for each database image. Then you sort these scores and take the best match. Even if the best match has less than 20% overlapping pixels, the results are usually good.
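The pixel-overlap comparison can be sketched like this (toy 4x4 binary images instead of the 256x256 ones described above):

```python
def overlap_score(drawing, template):
    # Fraction of the template's black (1) pixels that the user's
    # drawing also has black; both images must be the same size.
    template_black = sum(sum(row) for row in template)
    hits = sum(d & t for drow, trow in zip(drawing, template)
               for d, t in zip(drow, trow))
    return hits / template_black if template_black else 0.0

def best_match(drawing, database):
    # database: {shape_name: binary image}; the highest overlap wins.
    return max(database, key=lambda name: overlap_score(drawing, database[name]))

# Toy "database" of two shapes:
square = [[1, 1, 1, 1],
          [1, 0, 0, 1],
          [1, 0, 0, 1],
          [1, 1, 1, 1]]
diagonal = [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]
# A slightly imperfect user drawing of a square:
user_input = [[1, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [1, 1, 1, 1]]
print(best_match(user_input, {"square": square, "diagonal": diagonal}))
```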
Because some shapes can be considered the same even when rotated (imagine the various ways to place a triangle in an image: one tip pointing up, or down, or towards one side, etc.), you'll probably want to rotate your input drawing 12-24 times (by 15-30 degrees at each step) and compare each rotation against every image in your database. Given that this step will most likely require a lot of processing power, you might consider storing all the rotations of your initial database drawings in the database as different pictures. This makes the database bigger, but saves you the cost of rotating the input image at query time.
Given that the above algorithm is a bit of a resource hog, you might consider having a server somewhere, which can do the actual comparisons, especially if you want to add many images to your database. Since I already implemented this algorithm for a demo application, I can already tell you that you're going to have to do a lot of pixel operations. Also, rotating images with the Android SDK can be annoying, because it changes the image dimensions...
If you are feeling adventurous, here are a couple of papers describing state of the art algorithms for tackling this problem: "Shape contexts enable efficient retrieval of similar shapes" by Greg Mori, Serge Belongie and Jitendra Malik (2001) and "Shape Matching: Similarity Measures and Algorithms" by Remco C. Veltkamp (2001). The maths might be a bit heavy, though.
You should look into GestureOverlayView.
A good tutorial is: http://www.vogella.com/articles/AndroidGestures/article.html

Best Strategy for Storing Handwriting

I am writing a mobile app (Android) that will allow the user to 'write' to a canvas using a single-touch device with 1-pixel accuracy. The app will be running on a tablet device of approximately standard 8 1/2" x 11" size. My strategy is to store the 'text' as vector data: each stroke of the input device will essentially be a vector consisting of a start point, an end point, and some number of intermediate points that help define the shape of the vector (generated by the touchscreen/OS on touch movement). This should allow me to keep track of the order in which the strokes were put down (to support undo, etc.) and be flexible enough to allow this text to be resized, etc., like any other vector graphic.
However, doing some very rough back-of-the-envelope calculations, with a highly accurate input device and a large screen such that you can emulate a standard paper notepad one-for-one, you get ~1,700 strokes per full page of text. Figuring, worst case, that each stroke could be composed of ~20-30 individual points (a point for every pixel or so of the stroke), that means ~50,000 data points per page... WAY too big for SQLite/Android to handle with any expectation of reliability when a page is being reloaded and the vector strokes are being recreated (I have to imagine that pulling 50,000+ results from the SQLite db will exceed the CursorWindow limit of 1 MB).
I know I could break up the data retrieval into multiple queries, or could modify the stroke data such that I only add an intermediate point to help define the stroke vector shape if it is more than X pixels from a start, finish or other intermediate pixel, but I am wondering if I need to rethink this strategy from the ground up...
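The "only keep an intermediate point if it is more than X pixels away" idea can be sketched as follows (the minimum distance and the sample stroke are made up):

```python
def thin_stroke(points, min_dist=5.0):
    # Keep the endpoints, and drop intermediate points that fall
    # within min_dist pixels of the last point we kept. This trades
    # a little shape fidelity for far fewer stored points.
    if len(points) <= 2:
        return list(points)
    kept = [points[0]]
    for p in points[1:-1]:
        last = kept[-1]
        if ((p[0] - last[0]) ** 2 + (p[1] - last[1]) ** 2) ** 0.5 >= min_dist:
            kept.append(p)
    kept.append(points[-1])
    return kept

# A 1-px-spaced horizontal stroke of 21 points collapses to 5:
stroke = [(x, 0) for x in range(21)]
print(len(thin_stroke(stroke)))
```

A curvature-aware simplifier (e.g. Ramer-Douglas-Peucker) would preserve the stroke shape better for the same point budget, at the cost of more computation.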
Any suggestions on how to tackle this problem in a more efficient way?
Thanks!
Paul
Is there any reason for using vector data in the first place? Without knowing your other requirements, it seems to me that you could just store the data as a raster/bitmap and compress it with regular compression methods such as PNG/zip/DjVu (or, if performance suffers, simple things like run-length encoding / RLE).
EDIT: sorry, I didn't read the question carefully. However, if you only need things like "undo" and "resize", you can take a snapshot of the bitmap for every stroke (of course, you only need to snapshot the regions that change).
Also it might be possible to take a hybrid approach where you display a snapshot bitmap first while waiting for the (real) vector images to load.
Furthermore, I am not familiar with the Android cursor limit, but SQL queries can always be rewritten to split the result into pieces (via LIMIT ... OFFSET).
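A sketch of that LIMIT/OFFSET approach using Python's sqlite3 (the table layout, column names, and page size are hypothetical):

```python
import sqlite3

def load_points_paged(conn, page_size=1000):
    # Stream stroke points out of SQLite in fixed-size pages so no
    # single cursor result exceeds the CursorWindow limit.
    offset = 0
    while True:
        rows = conn.execute(
            "SELECT stroke_id, x, y FROM points "
            "ORDER BY stroke_id, seq LIMIT ? OFFSET ?",
            (page_size, offset),
        ).fetchall()
        if not rows:
            break
        yield from rows
        offset += page_size

# Demo with an in-memory database holding 2,500 fake points:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE points (stroke_id INT, seq INT, x INT, y INT)")
conn.executemany(
    "INSERT INTO points VALUES (?, ?, ?, ?)",
    [(i // 50, i % 50, i, 2 * i) for i in range(2500)],
)
points = list(load_points_paged(conn, page_size=1000))
print(len(points))  # all 2500 points, fetched in 3 pages
```

Note that large OFFSET values make each page progressively slower to fetch; keyset pagination (WHERE stroke_id > last_seen ...) scales better if that becomes an issue.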
The solution I am using for now, although I would be open to any further suggestions:
Create a canvas View that can both convert SVG paths to Android Paths and intercept motion events, converting them to Android Paths while also storing them as SVG paths.
Display the Android Paths on the screen in onDraw().
Write the SVG paths out to an .svg file.
Paul

Image detection inside an image

I usually play a game called Burako.
It has some color playing pieces with numbers from 1-13.
After a match finishes you have to count your points.
For example:
1 == 15 points
2 == 20 points
I want to create an app that takes a picture and count the pieces for me.
So I need something that recognizes an image inside an image.
I was about to read about OpenCV, since there is an Android port, but it feels like there should be a simpler way to do this.
What do you think?
I haven't used the Android port, but I think it's doable under good lighting conditions.
I would obtain the minimal bounding box of each of the pieces and rotate it accordingly, so you can compare it with a model image.
Another way would be to get the contours of the numbers written on the pieces (which I guess are in color) and do some contour matching against the numbers.
OpenCV is a big and complex framework, but it's also suitable for simple tasks like this.
