OpenCV Android: Hand detection

I'm using OpenCV on Android. I'm a total beginner with OpenCV. My end goal is to recognize hand gestures, but for that I first have to detect a human hand. I don't know where to start. Any help?
EDIT: I have already imported OpenCV and successfully run two sample projects (the face-detector sample didn't work, though). Now I want to move on to detecting hand gestures. There are only four gestures: the user moves his hand from left to right in front of the camera, and likewise for the three other directions.

Since you are planning to detect only four motions, your task is simplified to a huge degree.
Go through these links (1, 2) to get acquainted with the basics of hand detection.
Your main aim should be to separate the hand from the background, something like what is shown in the image below.
Once that is done, you can capture frames in succession, process each one, and keep track of the center and end points of the hand.
Now, by simply subtracting these points between successive frames, you can easily determine which of the four possible directions the hand is moving.
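As a rough sketch of that pipeline (not a drop-in solution), something like the following could work with OpenCV's Java bindings. The HSV skin range, the pixel-count cutoff, and the motion threshold are all assumptions you would tune for your camera and lighting:

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;
    import org.opencv.imgproc.Moments;

    public class HandDirectionTracker {

        private Point lastCenter;

        /** Returns "LEFT", "RIGHT", "UP", "DOWN", or null if no clear motion yet. */
        public String processFrame(Mat rgb) { // assumes a 3-channel RGB frame
            // 1. Segment skin-coloured pixels in HSV space (range is an assumption; tune it).
            Mat hsv = new Mat();
            Imgproc.cvtColor(rgb, hsv, Imgproc.COLOR_RGB2HSV);
            Mat mask = new Mat();
            Core.inRange(hsv, new Scalar(0, 30, 60), new Scalar(20, 150, 255), mask);

            // 2. Use the centroid of the skin mask as the hand position.
            Moments m = Imgproc.moments(mask, true);
            if (m.get_m00() < 1000) return null; // too few skin pixels: no hand in view
            Point center = new Point(m.get_m10() / m.get_m00(),
                                     m.get_m01() / m.get_m00());

            // 3. Subtract successive centroids and report the dominant axis.
            String direction = null;
            if (lastCenter != null) {
                double dx = center.x - lastCenter.x;
                double dy = center.y - lastCenter.y;
                double minMove = 25; // minimum pixels per frame to count as motion
                if (Math.abs(dx) > Math.abs(dy) && Math.abs(dx) > minMove) {
                    direction = dx > 0 ? "RIGHT" : "LEFT";
                } else if (Math.abs(dy) > minMove) {
                    direction = dy > 0 ? "DOWN" : "UP";
                }
            }
            lastCenter = center;
            return direction;
        }
    }

Feed it one frame at a time from the camera preview; a run of identical non-null results gives you the gesture.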

Related

Real time mark recognition on Android

I'm building an Android app that has to identify, in real time, a mark/pattern which will be on the four corners of a visiting card. I'm using the preview stream of the phone's rear camera as input.
I want to overlay a small circle on the screen where the mark is present. This is similar to how reference dots will be shown on screen by a QR reader at the corner points of the QR code preview.
I'm aware of how to get frames from the camera using the native Android SDK, but I have no clue about the processing that needs to be done, or how to optimize it for real-time detection. I tried messing around with OpenCV, and there seems to be a bit of lag in its preview frames.
So I'm trying to write a native algorithm using raw pixel values from the frame. Is this advisable? The mark/pattern will always be the same in my case. Please guide me on which algorithm to use to find the pattern.
The image below shows my pattern along with some details (ratios) about it (it is the same as the one used in QR codes, but I have it at all four corners instead of three).
I think one approach is to find black and white pixels in the ratio mentioned below to detect the mark and find the coordinates of its center, but I have no idea how to code this on Android. I'm looking for an optimized approach for real-time recognition and display.
Any help is much appreciated! Thanks
Detecting patterns on four corners of a visiting card:
Assuming the background is white, you can simply try this method.
Processing which needs to be done and optimization for real-time detection:
Yes, you need OpenCV.
Here is an example of real-time marker detection on Google Glass using OpenCV
In this example, the image shown on the tablet lags (Bluetooth); the Google Glass preview is much faster than the tablet's, but it still has some lag.
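If you go the raw-pixel route, the standard trick for QR-style finder patterns is to scan rows of a binarized frame for runs of black and white pixels in a 1:1:3:1:1 ratio. Here is a minimal Java/OpenCV sketch of that scan; the ratio and tolerances are assumptions based on the QR finder pattern the question mentions, and per-pixel Mat.get() is slow, so a real implementation would read each row into a byte array first:

    import org.opencv.core.Mat;

    public class FinderPatternScanner {

        /** True if five consecutive run lengths approximate 1:1:3:1:1 within tolerance. */
        static boolean matchesRatio(int[] runs) {
            int total = runs[0] + runs[1] + runs[2] + runs[3] + runs[4];
            if (total < 7) return false;     // pattern too small to judge
            double module = total / 7.0;     // 1+1+3+1+1 = 7 modules
            double tol = module / 2.0;
            return Math.abs(runs[0] - module) < tol
                && Math.abs(runs[1] - module) < tol
                && Math.abs(runs[2] - 3 * module) < 3 * tol
                && Math.abs(runs[3] - module) < tol
                && Math.abs(runs[4] - module) < tol;
        }

        /**
         * Scans row y of a binary image (0 = black, 255 = white) and returns the x
         * coordinate of the centre of the first 1:1:3:1:1 pattern found, or -1.
         */
        public static int scanRow(Mat binary, int y) {
            int[] runs = new int[5];
            int runCount = 0;
            boolean lastBlack = binary.get(y, 0)[0] < 128;
            int runLen = 1;
            for (int x = 1; x < binary.cols(); x++) {
                boolean black = binary.get(y, x)[0] < 128;
                if (black == lastBlack) {
                    runLen++;
                    continue;
                }
                // A run just ended: shift the history and record its length.
                System.arraycopy(runs, 1, runs, 0, 4);
                runs[4] = runLen;
                runCount++;
                // Test once we have 5 runs and the last completed run was black.
                if (runCount >= 5 && lastBlack && matchesRatio(runs)) {
                    return x - runs[4] - runs[3] - runs[2] / 2; // centre of middle run
                }
                runLen = 1;
                lastBlack = black;
            }
            return -1;
        }
    }

Running the same scan over columns at a candidate x and checking that the ratio holds vertically too filters out most false positives; the four marks then give you the corner coordinates to draw your overlay circles at.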

How to equalize brightness, contrast, histograms between two images using EMGUCV

What I am attempting to do is use EMGU to perform an AbsDiff of two images.
Given the following conditions:
The user starts their webcam and, with the webcam stationary, takes a picture.
The user moves into the frame and takes another picture (the webcam has NOT moved).
AbsDiff works well, but what I'm finding is that the ISO and white-balance adjustments made by certain cameras (even on Android and iPhone) are uncontrollable to a degree.
Therefore instead of fighting a losing battle I'd like to attempt some image post processing to see if I can equalize the two.
I found the following thread but it's not helping me much: How do I equalize contrast & brightness of images using opencv?
Can anyone offer specific details of what functions/methods/approach to take using EMGUCV?
I've tried using things like _EqualizeHist(). This yields very poor results.
Instead of equalizing the histograms for each image individually, I'd like to compare the brightness/contrast values and come up with an average that gets applied to both.
I'm not looking for someone to do the work for me (although a code example would CERTAINLY be appreciated). I'm looking for either exact guidance or some way to point the ship in the right direction.
Thanks for your time.
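One way to realize the averaging idea above is to compute each image's mean and standard deviation and linearly remap both images toward the pair's midpoint. A minimal sketch with OpenCV's Java bindings, assuming single-channel (grayscale) input; Emgu CV wraps the same underlying OpenCV operations, so porting this to C# should be mechanical:

    import org.opencv.core.*;

    public class BrightnessMatcher {

        /**
         * Linearly remaps both images in place so that each ends up with the
         * average mean and standard deviation of the pair.
         */
        public static void equalizePair(Mat a, Mat b) {
            MatOfDouble meanA = new MatOfDouble(), stdA = new MatOfDouble();
            MatOfDouble meanB = new MatOfDouble(), stdB = new MatOfDouble();
            Core.meanStdDev(a, meanA, stdA);
            Core.meanStdDev(b, meanB, stdB);

            // Midpoint statistics of the two images (first channel; grayscale assumed).
            double targetMean = (meanA.get(0, 0)[0] + meanB.get(0, 0)[0]) / 2.0;
            double targetStd  = (stdA.get(0, 0)[0]  + stdB.get(0, 0)[0])  / 2.0;

            remap(a, meanA.get(0, 0)[0], stdA.get(0, 0)[0], targetMean, targetStd);
            remap(b, meanB.get(0, 0)[0], stdB.get(0, 0)[0], targetMean, targetStd);
        }

        /** img = (img - mean) * (targetStd / std) + targetMean, as one convertTo. */
        private static void remap(Mat img, double mean, double std,
                                  double targetMean, double targetStd) {
            double gain = std > 1e-6 ? targetStd / std : 1.0; // contrast scale
            double bias = targetMean - mean * gain;           // brightness offset
            img.convertTo(img, -1, gain, bias);               // -1 keeps original depth
        }
    }

Running this on both frames before AbsDiff should cancel most of the global exposure and gain drift between the two shots, which is exactly the kind of variation auto-ISO and white balance introduce.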

How to add multiple pages in AIR for Android

I want to add multiple pages to my Android app, similar to the home screen on my phone: I want to be able to swipe left and right to see multiple pages.
I'm developing my app in Adobe Flash CC 2014 using "AIR 16.0 for Android".
Anyone know how I can do this?
You can take different approaches to this problem. You could create some SwipeGestures to detect the swipes, or you could go the way Flash has gone since 1999: set up a MovieClip (or several), listen for onMouseDown (ontouchstart) events, and call mc.startDrag() (you will want to limit the drag movement to the X axis). Then, onMouseUp (ontouchend), you can determine whether the current MovieClip is relatively centered, and tween it into the middle of the screen, or whether it is too far left/right and you should therefore page to the next page. There is also an out-of-the-box touch-drag implementation via ontouchmove.
Basically, what you are looking for is some kind of cover flow for AS3, or something a lot less fancy. Make yourself comfortable with startDrag and stopDrag and you will see how to get there by doing.
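The snap decision itself is just arithmetic. Here is the idea sketched in Java (the class and field names are hypothetical; in AS3 you would run the same calculation inside your ontouchend handler and feed the result to a tween):

    public class PageSnapper {

        private final int pageWidth;  // width of one page in pixels
        private final int pageCount;

        public PageSnapper(int pageWidth, int pageCount) {
            this.pageWidth = pageWidth;
            this.pageCount = pageCount;
        }

        /**
         * Given the container's x offset when the drag ends, returns the offset
         * it should be tweened to: the nearest whole page, clamped to range.
         */
        public int snapTarget(double currentX) {
            int page = (int) Math.round(-currentX / pageWidth); // nearest page index
            page = Math.max(0, Math.min(pageCount - 1, page));  // clamp to valid pages
            return -page * pageWidth;                           // tween target offset
        }
    }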

android corner tracking using opencv

I am trying to track the locations of the corners of a sheet of paper as I move it relative to an Android camera (you can assume that the sheet of paper will be a completely different color than the background). I want to find the x, y coordinates of each corner on the Android screen. I also want to be able to change the angle of the paper, so it won't necessarily appear perfectly rectangular all the time.
I am using OpenCV 2.4.1 for Android, but I could not find cvGoodFeaturesToTrack or cvFindCornerSubPix in the packages. Right now I am thinking of using the Canny algorithm to find the edges, then feeding the edges to cvFindContours to find the main intersections of the lines, which give the corners.
Any suggestions or source code would be greatly appreciated.
I suggest two options:
1- Use another OpenCV version that has those functions (you can check the online documentation).
2- Use the FAST detector and SIFT descriptors. This is a widely used, up-to-date method for this kind of task. It finds the best features across scales and is robust to lighting conditions, etc. You have to train on the marker (the sheet of paper) to extract its features with SIFT, then run the FAST detector on the camera scene to detect and track those features.
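For the Canny-plus-contours route the question already sketches, something like the following could work with OpenCV's Java bindings. The Canny thresholds and the 0.02 polygon-approximation factor are assumptions you would tune:

    import java.util.ArrayList;
    import java.util.List;
    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    public class PaperCornerFinder {

        /** Returns the 4 corner points of the largest quadrilateral contour, or null. */
        public static Point[] findCorners(Mat gray) { // 8-bit grayscale frame
            Mat edges = new Mat();
            Imgproc.Canny(gray, edges, 50, 150); // edge thresholds: tune for your scene

            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(edges, contours, new Mat(),
                    Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

            double bestArea = 0;
            Point[] best = null;
            for (MatOfPoint contour : contours) {
                MatOfPoint2f c2f = new MatOfPoint2f(contour.toArray());
                double peri = Imgproc.arcLength(c2f, true);
                MatOfPoint2f approx = new MatOfPoint2f();
                // Simplify the contour; a sheet of paper should reduce to 4 vertices
                // even when viewed at an angle.
                Imgproc.approxPolyDP(c2f, approx, 0.02 * peri, true);
                double area = Math.abs(Imgproc.contourArea(approx));
                if (approx.total() == 4 && area > bestArea) {
                    bestArea = area;
                    best = approx.toArray();
                }
            }
            return best; // corner coordinates in image space, in contour order
        }
    }

Because approxPolyDP works on the contour itself, this handles the paper appearing as a general quadrilateral rather than a perfect rectangle.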

Marker Recognition on Android (recognising Rubik's Cubes)

I'm developing an augmented reality application for Android that uses the phone's camera to recognise the arrangement of the coloured squares on each face of a Rubik's Cube.
One thing that I am unsure about is how exactly I would go about detecting and recognising the coloured squares on each face of the cube. If you look at a Rubik's Cube, you can see that each square is one of six possible colours with a thin black border. This led me to think that it should be relatively simple to detect a square, possibly using an existing marker-detection API.
My question really is: has anybody here had any experience with image recognition on Android? Ideally I'd like to be able to use an existing API, but it would be an interesting project to do from scratch if somebody could point me in the right direction to get started.
Many thanks in advance.
Do you want to point the camera at a cube, and have it understand the configuration?
Recognizing objects in photographs is an open problem in AI, so you'll need to constrain the problem quite a bit to get any traction on it. I suggest starting with something like:
The cube will be photographed from a distance of exactly 12 inches, with a 100W light source directly behind the camera. The cube will be set diagonally so it presents exactly 3 faces, with a corner in the center. The camera will be positioned so that it focuses directly on the cube corner in the center.
A picture will be taken. Then the cube will be turned 180 degrees vertically and horizontally so that the other three faces are visible, and a second picture will be taken. Since you know exactly where each face is expected to be, grab a few pixels from each region and assume that is the color of that square. Remember that the cube will usually be scrambled, not uniform as shown in the picture here, so you always have to look at all 9 * 6 = 54 little squares to get the color of each one.
The information in those two pictures defines the cube configuration. Generate an image of the cube in the same configuration, and allow the user to confirm or correct it.
It might be simpler to take 6 pictures - one of each face, and travel around the faces in well-defined order. Remember that the center square of each face does not move, and defines the correct color for that face.
Once you have the configuration, you can use OpenGL operations to rotate the cube slices. This will be a program with hundreds of lines of code to define and rotate the cube, plus whatever you do for image recognition.
In addition to what Peter said, it is probably best to overlay guide lines on the picture of the cube as the user takes the pictures. The user then lines up the cube within the guide lines, whether it's a single side (a square guide line) or three sides (three squares in perspective). You might also want to have the user specify the number of colored boxes in each row. In your code, sample the color at what should be the center of each colored box and compare it to the other colored boxes (within some tolerance level) to identify the colors. In addition to presenting the recognized results to the user, it would be nice to allow the user to correct the recognized colors. It does not seem like fancy image recognition is needed.
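To make the sampling idea from the two answers above concrete, here is a rough Java/OpenCV sketch that reads one face: it averages a small patch at the centre of each of the 9 stickers and labels it with the nearest of six reference colours. The reference BGR values are placeholders you would calibrate from real photos under your lighting:

    import org.opencv.core.*;

    public class CubeFaceReader {

        // Rough BGR reference colours for the six stickers (placeholders; calibrate).
        private static final Scalar[] REFS = {
            new Scalar(255, 255, 255), // white
            new Scalar(0, 215, 255),   // yellow
            new Scalar(0, 0, 200),     // red
            new Scalar(0, 100, 255),   // orange
            new Scalar(50, 150, 0),    // green
            new Scalar(200, 80, 0),    // blue
        };
        private static final String[] NAMES = {"W", "Y", "R", "O", "G", "B"};

        /** faceRoi: image region aligned to one cube face. Returns 9 labels, row-major. */
        public static String[] readFace(Mat faceRoi) {
            String[] labels = new String[9];
            int cellW = faceRoi.cols() / 3, cellH = faceRoi.rows() / 3;
            for (int row = 0; row < 3; row++) {
                for (int col = 0; col < 3; col++) {
                    // Average a small patch at the centre of each sticker,
                    // staying away from the black borders.
                    Rect patch = new Rect(col * cellW + cellW / 3,
                                          row * cellH + cellH / 3,
                                          cellW / 3, cellH / 3);
                    Scalar mean = Core.mean(faceRoi.submat(patch));
                    labels[row * 3 + col] = nearest(mean);
                }
            }
            return labels;
        }

        /** Nearest reference colour by squared Euclidean distance in BGR. */
        private static String nearest(Scalar c) {
            int best = 0;
            double bestD = Double.MAX_VALUE;
            for (int i = 0; i < REFS.length; i++) {
                double d = 0;
                for (int ch = 0; ch < 3; ch++) {
                    double diff = c.val[ch] - REFS[i].val[ch];
                    d += diff * diff;
                }
                if (d < bestD) { bestD = d; best = i; }
            }
            return NAMES[best];
        }
    }

The "within some tolerance level" comparison then amounts to rejecting a label when even the best distance is large, which is a good moment to ask the user to correct that sticker.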
Nice idea. I'm planning to use computer vision and marker detection too, but for another project. I am still looking to see whether there is any information available on the web, e.g. on linking OpenCV or ARToolKit to the Android SDK. If you have any additional information about how to link a computer vision API, please let me know.
See you soon, and good luck!
NYARToolkit uses marker detection and is written in Java (as well as managed C# for Windows devices). I don't know how well it works on the Android platform, but I have seen it used on Windows Mobile devices, and it's very well done.
Good luck, and happy programming!
I'd suggest looking at the Android OpenCV library. You probably want to examine the blob-detection algorithms. You may also want to consider Hough lines or contours to detect quads.
