I want to create some sort of graphical arrow, or possibly draw an arrow over a compass, to show the user which direction the wind is coming from. This would obviously change with the orientation of the person's handset.
My application can tell me the direction (in degrees) that the wind is coming from.
My question is, what is the best way to implement something like this?
Thanks
In Eclipse, create a new Android project and select "Create project from existing sample". Choose the target Android version and then ApiDemos. There you will find a Compass application and many other examples that can help you draw your screen.
I guess the best would be if your wind arrow were in 3D or simulated 3D, so that it does not matter how the user is holding the device: they would always look at the wind arrow from an elevated virtual vantage point.
In the same ApiDemos there is also "Sensors" demo which draws the physical orientation of the device.
Draw a compass, draw the wind arrow accordingly.
If the device knows its orientation, rotate the whole thing so that N on the compass points to actual North.
Then ask users whether they are happy with this setup; if not, find out why and improve it. But start with something dead simple, like the above.
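For instance, a minimal sketch of the dead-simple version as a custom View, assuming the app supplies a wind bearing and a device azimuth from the orientation sensors (all names here are illustrative):

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.util.AttributeSet;
import android.view.View;

public class WindCompassView extends View {

    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private float windBearingDeg;   // direction the wind comes from, in degrees
    private float deviceAzimuthDeg; // device heading from the orientation sensors

    public WindCompassView(Context context, AttributeSet attrs) {
        super(context, attrs);
        paint.setStrokeWidth(8f);
        paint.setStyle(Paint.Style.STROKE);
    }

    /** Call this whenever a new wind bearing or sensor azimuth arrives. */
    public void update(float windBearing, float deviceAzimuth) {
        windBearingDeg = windBearing;
        deviceAzimuthDeg = deviceAzimuth;
        invalidate(); // schedule a redraw
    }

    @Override
    protected void onDraw(Canvas canvas) {
        float cx = getWidth() / 2f;
        float cy = getHeight() / 2f;
        float r = Math.min(cx, cy) * 0.9f;

        canvas.save();
        // Counter-rotate by the device heading so N on the rose points to real North.
        canvas.rotate(-deviceAzimuthDeg, cx, cy);
        canvas.drawCircle(cx, cy, r, paint); // stand-in for a proper compass rose

        // Rotate on to the wind bearing and draw a simple arrow shaft.
        canvas.rotate(windBearingDeg, cx, cy);
        canvas.drawLine(cx, cy + r * 0.8f, cx, cy - r * 0.8f, paint);
        canvas.restore();
    }
}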
I've been reading for the last two days and I'm not sure about this. I would like to know if there is any way for my program (an Android app) to read which direction the user is pointing the phone (for example: the user is looking northeast, at such-and-such degrees).
Is there any API to do this, or do I have to program it myself with the phone's compass and accelerometer?
Edit:
What I want is for this blue dot to know which way that triangular area is pointing.
Thanks
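For reference, a minimal sketch of deriving the heading from the sensor framework, using the rotation-vector sensor (a fusion of the accelerometer, magnetometer and, where available, gyroscope); the class name is illustrative:

import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

public class HeadingActivity extends Activity implements SensorEventListener {

    private SensorManager sensorManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
    }

    @Override
    protected void onResume() {
        super.onResume();
        // May be null on devices without the sensor; check in production code.
        Sensor rotation = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
        sensorManager.registerListener(this, rotation, SensorManager.SENSOR_DELAY_UI);
    }

    @Override
    protected void onPause() {
        super.onPause();
        sensorManager.unregisterListener(this); // always release the sensor
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float[] rotationMatrix = new float[9];
        float[] orientation = new float[3];
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        SensorManager.getOrientation(rotationMatrix, orientation);
        // orientation[0] is the azimuth in radians: 0 = North, positive toward East.
        float azimuthDeg = (float) Math.toDegrees(orientation[0]);
        // e.g. azimuthDeg near 45 means the device is pointing roughly northeast.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}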
I am writing a navigation app and I require rotation of the camera around the user (so rather than just rotating the user icon with the compass, the camera rotates around the user, giving the impression that the map is rotating in accordance with real life).
I couldn't seem to find a default mode to do this. I have tried the bearing tracking modes (GPS and compass) as well as the location tracking modes:
mapboxMap.getTrackingSettings().setMyLocationTrackingMode(MyLocationTracking.TRACKING_FOLLOW);
As I was unable to get it working, I implemented a custom compass with a basic low-pass filter in order to rotate the camera around the user. However, since upgrading from Mapbox 4.1.1 to 4.2.1, my custom implementation has broken (rotation has become very laggy and jagged).
I am sure there is a much easier way to do this, but I am having trouble figuring it out. Could someone please advise me as to whether I was going about it the correct way, or whether there is a much easier solution that I am overlooking?
Thank you in advance!
To track the user's location and rotate the map so that it always points in the same direction as the user, use these lines combined:
mapboxMap.getTrackingSettings().setMyLocationTrackingMode(MyLocationTracking.TRACKING_FOLLOW);
mapboxMap.getTrackingSettings().setMyBearingTrackingMode(MyBearingTracking.COMPASS);
Note: for the full code, I'd recommend checking out this example.
Yes, as @SCTaylor says, you absolutely need .setDismissAllTrackingOnGesture(false) to make this work.
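Put together, a sketch of the whole setup might look like the following; this assumes the Mapbox 4.x TrackingSettings API referenced above, with mapView being the MapView from your layout:

// e.g. in onCreate(), after setContentView() and mapView initialisation
mapView.getMapAsync(new OnMapReadyCallback() {
    @Override
    public void onMapReady(MapboxMap mapboxMap) {
        // Keep the camera centered on the user's location...
        mapboxMap.getTrackingSettings()
                .setMyLocationTrackingMode(MyLocationTracking.TRACKING_FOLLOW);
        // ...and rotate the camera with the device compass.
        mapboxMap.getTrackingSettings()
                .setMyBearingTrackingMode(MyBearingTracking.COMPASS);
        // Without this, any map gesture silently cancels both tracking modes.
        mapboxMap.getTrackingSettings().setDismissAllTrackingOnGesture(false);
    }
});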
I have a very creative requirement. I am not sure if this is feasible, but it would certainly spice up my app if it is.
Premise: On Android phones, if the screen is covered by a hand (not touching, just close to the screen), or if the phone is placed over the ear during a call, the phone locks or basically blacks out. So there must be some technology to recognize that my hand is near the screen.
Problem: I have an image in my app. If the user points at the image without touching the screen, just as an extension of the premise, I must be able to tell that the user is pointing at the image, and change the image. Is this possible?
UPDATE: An example use:
Say I want to build a fun app where touching an image leads to some other place. For example, I have two doors: one to a car and one to a lion. Now, just when the user is about to touch door 1, the door should show a message asking "are you sure?", and actually touching it then takes you to another place. It's a rather rudimentary example, but I hope you get the point.
The feature you are talking about is the proximity sensor. See Sensor and SensorEvent.values for Sensor.TYPE_PROXIMITY.
You could get the distance of the hand from the screen, but you won't really be sure where in the XY co-ordinate system the hand is. So you won't be able to figure out whether the user is pointing to the "car door" or to the "lion door".
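A minimal sketch of listening to that sensor, assuming a plain Activity (note that many devices report only a binary near/far value rather than a true distance):

import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

public class ProximityActivity extends Activity implements SensorEventListener {

    private SensorManager sensorManager;
    private Sensor proximity;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        proximity = sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
    }

    @Override
    protected void onResume() {
        super.onResume();
        sensorManager.registerListener(this, proximity, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    protected void onPause() {
        super.onPause();
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Distance of the object from the screen, in centimetres. Many devices
        // are binary: values[0] is either 0 (near) or getMaximumRange() (far).
        float distanceCm = event.values[0];
        boolean handIsNear = distanceCm < proximity.getMaximumRange();
        // There is no X/Y information, so you can't tell WHICH door is covered.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}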
You could make this work on a phone with a wide-angle front camera that can see the area in front of the whole screen. You'd have to write the software for recognizing hand movements and translate these into screen actions.
Why not just use touch, if I may ask?
I'm developing an Android application. I want to do the following:
I will have a black screen with an object in its center, for example, a vase.
With this app, I want to offer a 360-degree view of the vase. Let me explain: imagine the vase is the center of an imaginary circle. I want the user to follow this circle, to see the vase from any point of view. I don't know if I'm explaining it well.
In real life, you can move around a vase and see it in front, behind, and other sides. I want to simulate this.
My problem is that I'm not sure if I can simulate this using accelerometer.
How can I know if the user is describing a circle with the mobile phone?
If you don't understand me or you need more details, please tell me.
You should combine the accelerometer with the compass. The compass gives you direction.
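A minimal sketch of that combination, assuming one SensorEventListener has been registered for both Sensor.TYPE_ACCELEROMETER and Sensor.TYPE_MAGNETIC_FIELD (it uses android.hardware.SensorManager, Sensor and SensorEvent):

private final float[] gravity = new float[3];     // last accelerometer reading
private final float[] geomagnetic = new float[3]; // last compass reading

@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        System.arraycopy(event.values, 0, gravity, 0, 3);
    } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
        System.arraycopy(event.values, 0, geomagnetic, 0, 3);
    }

    float[] r = new float[9];
    float[] orientation = new float[3];
    // Combine both sensors into a rotation matrix, then extract the heading.
    if (SensorManager.getRotationMatrix(r, null, gravity, geomagnetic)) {
        SensorManager.getOrientation(r, orientation);
        float azimuthDeg = (float) Math.toDegrees(orientation[0]);
        // As the user walks around the vase, azimuthDeg sweeps through 360 degrees,
        // so it can be used to pick which side of the vase to display.
    }
}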
I'm developing an augmented reality application for Android that uses the phone's camera to recognise the arrangement of the coloured squares on each face of a Rubik's Cube.
One thing that I am unsure about is how exactly I would go about detecting and recognising the coloured squares on each face of the cube. If you look at a Rubik's Cube you can see that each square is one of six possible colours with a thin black border. This led me to think that it should be relatively simple to detect a square, possibly using an existing marker detection API.
My question is really: has anybody here had any experience with image recognition on Android? Ideally I'd like to be able to implement an existing API, but it would be an interesting project to do from scratch if somebody could point me in the right direction to get started.
Many thanks in advance.
Do you want to point the camera at a cube, and have it understand the configuration?
Recognizing objects in photographs is an open problem in AI. So you'll need to constrain the problem quite a bit to get any traction on it. I suggest starting with something like:
The cube will be photographed from a distance of exactly 12 inches, with a 100W light source directly behind the camera. The cube will be set diagonally so it presents exactly 3 faces, with a corner in the center. The camera will be positioned so that it focuses directly on the cube corner in the center.
A picture will be taken. Then the cube will be turned 180 degrees vertically and horizontally, so that the other three faces are visible. A second picture will be taken. Since you know exactly where each face is expected to be, grab a few pixels from each region, and assume that is the color of that square. Remember that the cube will usually be scrambled, not uniform as shown in the picture here. So you always have to look at 9*6 = 54 little squares to get the color of each one.
The information in those two pictures defines the cube configuration. Generate an image of the cube in the same configuration, and allow the user to confirm or correct it.
It might be simpler to take 6 pictures, one of each face, travelling around the faces in a well-defined order. Remember that the center square of each face does not move, and defines the correct color for that face.
Once you have the configuration, you can use OpenGL operations to rotate the cube slices. This will be a program with hundreds of lines of code to define and rotate the cube, plus whatever you do for image recognition.
In addition to what Peter said, it is probably best to overlay guide lines on the picture of the cube as the user takes the pictures. The user then lines up the cube within the guide lines, whether it's a single side (a square guide line) or three sides (three squares in perspective). You also might want to have the user specify the number of colored boxes in each row. In your code, sample the color at what should be the center of each colored box and compare it to the other colored boxes (within some tolerance level) to identify the colors. In addition to presenting the recognized results to the user, it would be nice to allow the user to correct the recognized colors. It does not seem like fancy image recognition is needed.
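A sketch of that sampling step, assuming the face has already been lined up inside a square guide region of a Bitmap; the reference colors and class name are illustrative:

import android.graphics.Bitmap;
import android.graphics.Color;

public final class FaceReader {

    // Illustrative reference colors for the six sticker colors.
    private static final int[] REFERENCE = {
        Color.WHITE, Color.YELLOW, Color.RED,
        Color.rgb(255, 140, 0), // orange
        Color.GREEN, Color.BLUE
    };

    /**
     * Samples the center pixel of each of the 3x3 stickers inside the square
     * guide region (left, top, size) and returns the nearest reference color
     * index for each sticker, row by row.
     */
    public static int[] readFace(Bitmap photo, int left, int top, int size) {
        int[] face = new int[9];
        int cell = size / 3;
        for (int row = 0; row < 3; row++) {
            for (int col = 0; col < 3; col++) {
                int x = left + col * cell + cell / 2;
                int y = top + row * cell + cell / 2;
                face[row * 3 + col] = nearest(photo.getPixel(x, y));
            }
        }
        return face;
    }

    /** Index of the reference color closest in RGB space (the "tolerance"). */
    private static int nearest(int pixel) {
        int best = 0;
        int bestDist = Integer.MAX_VALUE;
        for (int i = 0; i < REFERENCE.length; i++) {
            int c = REFERENCE[i];
            int dr = Color.red(pixel) - Color.red(c);
            int dg = Color.green(pixel) - Color.green(c);
            int db = Color.blue(pixel) - Color.blue(c);
            int dist = dr * dr + dg * dg + db * db;
            if (dist < bestDist) {
                bestDist = dist;
                best = i;
            }
        }
        return best;
    }
}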
Nice idea! I'm planning to use computer vision and marker detection too, but for another project. I am still looking to see if there is any information available on the web, e.g. on linking OpenCV or ARToolkit to the Android SDK. If you have any additional information about how to link a computer vision API, please let me know.
See you soon, and good luck!
NYARToolkit uses marker detection and is written in Java (as well as managed C# for Windows devices). I don't know how well it works on the Android platform, but I have seen it used on Windows Mobile devices, and it's very well done.
Good luck, and happy programming!
I'd suggest looking at the Android OpenCV library. You probably want to examine the blob detection algorithms. You may also want to consider Hough lines or contours to detect quads.