I am trying to detect whether the user is moving the device in a diagonal motion (like a sword swing). Even a purely vertical move is fine; I just need to know that the device moved from top to bottom. I am exploring sensors (mainly the accelerometer, gyroscope, and linear accelerometer).
I can't seem to figure out how to do the detection. Are there any working examples out there, or pointers?
Thank you
I want to create a Glass app that responds to a finger waved past the Glass camera, much like the Shape Splitter mini-game does. For those of you who are unfamiliar with Shape Splitter, a screencast of it on a phone is shown here; the swiping is performed by moving your finger in front of Glass's camera. http://youtu.be/aKGgT8H0AJM?t=4m27s
I'm curious as to how this works and how I can use my hand as an interactive part of my application. Does anyone know how this is done? Or have any suggestions for recreating the same effect?
I'm developing an Android application. I want to do the following:
I will have a black screen with an object in its center, for example, a vase.
With this app, I will show a 360-degree view of the vase. To explain: imagine the vase is the center of an imaginary circle. I want the user to follow this circle, to see the vase from any point of view. I don't know if I'm explaining it well.
In real life, you can move around a vase and see it from the front, from behind, and from the other sides. I want to simulate this.
My problem is that I'm not sure whether I can simulate this using the accelerometer.
How can I know if the user is describing a circle with the mobile phone?
If you don't understand me or you need more details, please tell me.
You should combine the accelerometer with the compass (magnetometer). The compass gives you the direction the device is facing.
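As a rough sketch of how the two sensors are usually combined on Android: feed the accelerometer and magnetometer readings into SensorManager.getRotationMatrix() and SensorManager.getOrientation() to get the azimuth, i.e. the direction the device is pointing. The activity name below is made up for illustration.

    import android.app.Activity;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import android.os.Bundle;

    public class OrientationActivity extends Activity implements SensorEventListener {
        private SensorManager sensorManager;
        private final float[] accel = new float[3];
        private final float[] magnetic = new float[3];
        private final float[] rotation = new float[9];
        private final float[] orientation = new float[3];

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        }

        @Override
        protected void onResume() {
            super.onResume();
            sensorManager.registerListener(this,
                    sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                    SensorManager.SENSOR_DELAY_UI);
            sensorManager.registerListener(this,
                    sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
                    SensorManager.SENSOR_DELAY_UI);
        }

        @Override
        protected void onPause() {
            super.onPause();
            sensorManager.unregisterListener(this);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            // Keep the latest reading from each sensor.
            if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
                System.arraycopy(event.values, 0, accel, 0, 3);
            } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
                System.arraycopy(event.values, 0, magnetic, 0, 3);
            }
            // Combine both readings into a rotation matrix, then extract the
            // azimuth (rotation around the vertical axis, in radians).
            if (SensorManager.getRotationMatrix(rotation, null, accel, magnetic)) {
                SensorManager.getOrientation(rotation, orientation);
                float azimuthDegrees = (float) Math.toDegrees(orientation[0]);
                // azimuthDegrees is the direction the device faces relative to
                // North; track how it changes over time to detect the user
                // walking a circle around the vase.
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }

Watching how the azimuth sweeps through 360 degrees over time should tell you whether the user has walked all the way around the object.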
I want to create some sort of graphical arrow, or possibly draw an arrow over a compass, to show the user what direction the wind is coming from. This would obviously change, given the orientation of the person's handset.
My application can tell me what direction (in degrees) the wind is coming from.
My question is, what is the best way to implement something like this?
Thanks
In Eclipse, create a new Android project and select "Create project from existing sample". Choose the target Android version and then ApiDemos. There you will find a Compass application and many other examples which can help you draw your screen.
I guess the best would be if your wind arrow were in 3D or simulated 3D, so that it does not matter how the user is holding the device; they would always look at the wind arrow from an elevated virtual vantage point.
In the same ApiDemos there is also a "Sensors" demo which draws the physical orientation of the device.
Draw a compass, draw the wind arrow accordingly.
If the device knows its orientation, rotate the whole thing so that N on the compass points to actual North.
Then ask users whether they are happy with this setup; if not, find out why and improve it. But start with something dead simple, like the above.
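Here is a minimal 2D sketch of that rotation idea, assuming you already have the device azimuth from the orientation sensors; WindArrowView and update() are made-up names for illustration:

    import android.content.Context;
    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Paint;
    import android.graphics.Path;
    import android.view.View;

    public class WindArrowView extends View {
        private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        private final Path arrow = new Path();
        private float windFromDegrees;      // where the wind comes FROM, relative to North
        private float deviceAzimuthDegrees; // device heading from the sensors

        public WindArrowView(Context context) {
            super(context);
            paint.setColor(Color.WHITE);
            paint.setStyle(Paint.Style.FILL);
        }

        public void update(float windFrom, float deviceAzimuth) {
            windFromDegrees = windFrom;
            deviceAzimuthDegrees = deviceAzimuth;
            invalidate(); // redraw with the new angles
        }

        @Override
        protected void onDraw(Canvas canvas) {
            float cx = getWidth() / 2f;
            float cy = getHeight() / 2f;
            // Rotate the canvas so the arrow stays fixed relative to the real
            // world: subtracting the device heading makes 0 degrees point at
            // actual North regardless of how the handset is held.
            canvas.save();
            canvas.rotate(windFromDegrees - deviceAzimuthDegrees, cx, cy);
            arrow.reset();
            arrow.moveTo(cx, cy - 100);   // tip points toward where the wind comes from
            arrow.lineTo(cx - 30, cy + 60);
            arrow.lineTo(cx + 30, cy + 60);
            arrow.close();
            canvas.drawPath(arrow, paint);
            canvas.restore();
        }
    }

Rotating by windFromDegrees - deviceAzimuthDegrees is what keeps N on the compass pointing at actual North, so the same trick handles both the compass face and the wind arrow.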
I'm developing an augmented reality application for Android that uses the phone's camera to recognise the arrangement of the coloured squares on each face of a Rubik's Cube.
One thing that I am unsure about is how exactly I would go about detecting and recognising the coloured squares on each face of the cube. If you look at a Rubik's Cube you can see that each square is one of six possible colours with a thin black border. This led me to think that it should be relatively simple to detect a square, possibly using an existing marker-detection API.
My question is really: has anybody here had any experience with image recognition on Android? Ideally I'd like to be able to use an existing API, but it would be an interesting project to do from scratch if somebody could point me in the right direction to get started.
Many thanks in advance.
Do you want to point the camera at a cube, and have it understand the configuration?
Recognizing objects in photographs is an open AI problem. So you'll need to constrain the problem quite a bit to get any traction on it. I suggest starting with something like:
The cube will be photographed from a distance of exactly 12 inches, with a 100W light source directly behind the camera. The cube will be set diagonally so it presents exactly 3 faces, with a corner in the center. The camera will be positioned so that it focuses directly on the cube corner in the center.
A picture will be taken. Then the cube will be turned 180 degrees vertically and horizontally, so that the other three faces are visible. A second picture will be taken. Since you know exactly where each face is expected to be, grab a few pixels from each region, and assume that is the color of that square. Remember that the cube will usually be scrambled, not uniform as shown in the picture here. So you always have to look at 9*6 = 54 little squares to get the color of each one.
The information in those two pictures defines the cube configuration. Generate an image of the cube in the same configuration, and allow the user to confirm or correct it.
It might be simpler to take 6 pictures - one of each face, travelling around the faces in a well-defined order. Remember that the center square of each face does not move, and defines the correct color for that face.
Once you have the configuration, you can use OpenGL operations to rotate the cube slices. This will be a program with hundreds of lines of code to define and rotate the cube, plus whatever you do for image recognition.
In addition to what Peter said, it is probably best to overlay guide lines on the picture of the cube as the user takes the pictures. The user then lines up the cube within the guide lines, whether it's a single side (a square guide line) or three sides (three squares in perspective). You also might want to have the user specify the number of colored boxes in each row. In your code, sample the color in what should be the center of each colored box and compare it to the other colored boxes (within some tolerance level) to identify the colors. In addition to providing the recognized results to the user, it would be nice to allow the user to make changes to the recognized colors. It does not seem like fancy image recognition is needed.
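As a rough sketch of that center-sampling idea, assuming the face has already been lined up inside a known square region of the photo (CubeColorSampler is a made-up name, and the reference RGB values are guesses that would need calibrating for real lighting):

    import android.graphics.Bitmap;
    import android.graphics.Color;

    public class CubeColorSampler {
        // Reference RGB values for the six face colors; rough guesses only.
        private static final int[] REFERENCE = {
                Color.rgb(200, 0, 0),     // red
                Color.rgb(0, 130, 60),    // green
                Color.rgb(0, 70, 170),    // blue
                Color.rgb(255, 90, 0),    // orange
                Color.rgb(255, 210, 0),   // yellow
                Color.rgb(245, 245, 245), // white
        };
        private static final String[] NAMES =
                {"red", "green", "blue", "orange", "yellow", "white"};

        // Sample the center of each of the 9 squares of one face, assuming the
        // face fills the rectangle (left, top)-(left+size, top+size).
        public static String[] sampleFace(Bitmap photo, int left, int top, int size) {
            String[] result = new String[9];
            int cell = size / 3;
            for (int row = 0; row < 3; row++) {
                for (int col = 0; col < 3; col++) {
                    int x = left + col * cell + cell / 2;
                    int y = top + row * cell + cell / 2;
                    result[row * 3 + col] = nearestColor(photo.getPixel(x, y));
                }
            }
            return result;
        }

        // Pick the reference color with the smallest squared RGB distance;
        // this is the "within some tolerance" comparison in its simplest form.
        private static String nearestColor(int pixel) {
            int best = 0;
            long bestDist = Long.MAX_VALUE;
            for (int i = 0; i < REFERENCE.length; i++) {
                long dr = Color.red(pixel) - Color.red(REFERENCE[i]);
                long dg = Color.green(pixel) - Color.green(REFERENCE[i]);
                long db = Color.blue(pixel) - Color.blue(REFERENCE[i]);
                long dist = dr * dr + dg * dg + db * db;
                if (dist < bestDist) {
                    bestDist = dist;
                    best = i;
                }
            }
            return NAMES[best];
        }
    }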
Nice idea! I'm planning to use computer vision and marker detectors too, but for another project. I am still looking to see whether there is any information available on the web, e.g. on linking OpenCV or ARToolKit to the Android SDK. If you have any additional information about how to link a computer vision API, please let me know.
See you soon, and good luck!
NYARToolkit uses marker detection and is written in Java (as well as managed C# for Windows devices). I don't know how well it works on the Android platform, but I have seen it used on Windows Mobile devices, and it's very well done.
Good luck, and happy programming!
I'd suggest looking at the Android OpenCV library. You probably want to examine the blob detection algorithms. You may also want to consider Hough lines or contours to detect quads.
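As a rough sketch of the contour approach using the OpenCV Java bindings, assuming the camera frame has already been converted to a grayscale Mat (QuadDetector is a made-up name, and the Canny and approxPolyDP thresholds would need tuning for your lighting):

    import org.opencv.core.Mat;
    import org.opencv.core.MatOfPoint;
    import org.opencv.core.MatOfPoint2f;
    import org.opencv.imgproc.Imgproc;

    import java.util.ArrayList;
    import java.util.List;

    public class QuadDetector {
        // Find contours that simplify to four corners (candidate cube squares).
        public static List<MatOfPoint2f> findQuads(Mat grayImage) {
            Mat edges = new Mat();
            Imgproc.Canny(grayImage, edges, 50, 150); // edge detection

            List<MatOfPoint> contours = new ArrayList<>();
            Mat hierarchy = new Mat();
            Imgproc.findContours(edges, contours, hierarchy,
                    Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

            List<MatOfPoint2f> quads = new ArrayList<>();
            for (MatOfPoint contour : contours) {
                MatOfPoint2f curve = new MatOfPoint2f(contour.toArray());
                MatOfPoint2f approx = new MatOfPoint2f();
                double perimeter = Imgproc.arcLength(curve, true);
                // Simplify the contour; keep it only if it reduces to 4 corners.
                Imgproc.approxPolyDP(curve, approx, 0.04 * perimeter, true);
                if (approx.total() == 4) {
                    quads.add(approx);
                }
            }
            return quads;
        }
    }

From there you could sample the pixel color inside each detected quad, as described in the earlier answers.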