getDefaultDisplay().getRotation()
retrieves values like
Surface.ROTATION_0,
Surface.ROTATION_90,
Surface.ROTATION_180,
Surface.ROTATION_270,
and that's rotation in the XY plane, I guess. Am I right? What if I want to know the rotation in the XZ or YZ plane?
What I'm trying to achieve is, for example, detecting whether a runner is slowing down (when I can't use the GPS), where the runner can carry the phone in different positions. I must know which axis is parallel to the floor, so I can check that axis's values for deceleration and not be fooled by the phone just "jumping" along with the runner.
Thanks in advance. Guillermo.
From what I understand, Android uses world coordinates for the rotation matrix that I would use to get the orientation of the device. However, I'm looking to get the device's orientation relative to the device itself, similar to how attitude is represented in iOS.
In other words, the axis used for roll would be a line passing through the top and bottom of the device, the pitch axis would pass through the sides of the device, and the yaw axis would pass vertically through the device.
I would like to know if Android provides any methods that would let me get these orientation values, or if there is a way I could compute them manually.
Any help would be greatly appreciated.
I eventually figured out a solution that suited my needs. I simply used the getAngleChange() method from the SensorManager class, passing a calibration matrix as the previous matrix.
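For anyone who lands here later, here is a minimal sketch of what that can look like, assuming a TYPE_ROTATION_VECTOR sensor and a calibration matrix captured on the first reading; the class and field names are just illustrative:

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class AttitudeTracker implements SensorEventListener {
    private final float[] calibrationMatrix = new float[9]; // reference attitude
    private final float[] currentMatrix = new float[9];
    private final float[] angleChange = new float[3];       // yaw, pitch, roll deltas (radians)
    private boolean calibrated = false;

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ROTATION_VECTOR) return;
        SensorManager.getRotationMatrixFromVector(currentMatrix, event.values);
        if (!calibrated) {
            // First reading (or a user-triggered reset) becomes the reference.
            System.arraycopy(currentMatrix, 0, calibrationMatrix, 0, 9);
            calibrated = true;
            return;
        }
        // Attitude relative to the calibration attitude, as described above.
        SensorManager.getAngleChange(angleChange, currentMatrix, calibrationMatrix);
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```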
This is my first post on this forum and I'm very new to programming. I want to build an application where I can see exactly where some GPS positions are on my phone's screen. I know a lot of applications, like junaio, mixare and others, but they only show the direction to the objects and they are not very accurate (their goal is not to project objects onto their exact position on screen), so I want to build it myself. I'm programming on Android, but I think it would be much the same on iPhone.
I followed the steps suggested by dabhaid:
There are three steps.
1) Determine your position and orientation using sensors.
2) Convert from GPS coordinate space to a planar coordinate space by determining the relative position and bearing of known GPS coordinates, using e.g. great-circle distance and bearing. (Your device stays at the origin of the coordinate space with this scheme.)
3) Do a perspective projection http://en.wikipedia.org/wiki/3D_projection#Perspective_projection to figure out where on the plane that is your display (ok, your camera sensor) the objects should appear, so you can augment them.
Step 1: easy, I have the GPS position and all orientations from my mobile device (x, y, z). For further refinement, I can use some algorithm to smooth these values (average, low-pass filter, whatever).
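For the smoothing mentioned here, a minimal exponential low-pass filter could look like the sketch below; the ALPHA smoothing factor is an assumed value you would need to tune:

```java
// Exponential smoothing: output trails the input; smaller ALPHA = smoother.
public class LowPassFilter {
    private static final float ALPHA = 0.15f; // assumed value, tune for your sensors
    private float[] smoothed;

    public float[] filter(float[] input) {
        if (smoothed == null) {
            smoothed = input.clone(); // seed with the first reading
        } else {
            for (int i = 0; i < input.length; i++) {
                smoothed[i] += ALPHA * (input[i] - smoothed[i]);
            }
        }
        return smoothed;
    }
}
```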
Step 2: I don't know what is meant exactly by a planar coordinate space. I have some different approaches for converting my GPS coordinate space. One of them is ECEF (earth-centered), where (0,0,0) is the center of the earth. Somehow this doesn't look right to me, because every little change along ONE axis results in changes along the other two axes: if I change the altitude, all three coordinates change. I don't know if I can follow step 3 with this coordinate system.
In step 2, using haversine is mentioned. That would give me the distance to the point, but I don't get x, y, z from it. Do I have to calculate x and y using trigonometry (bearing (alpha) + distance (hypotenuse))?
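Yes, plain trigonometry is enough. A sketch of that conversion, assuming your own fix is the origin of a local flat plane (x = east, y = north), the distance comes from haversine, and the bearing from the great-circle bearing formula; the names are illustrative:

```java
public final class LocalProjection {
    /**
     * @param distanceMeters great-circle distance from haversine
     * @param bearingDegrees bearing from your position to the target, clockwise from north
     * @return {x, y} in meters: x = east, y = north, with you at the origin
     */
    public static float[] toLocalXY(double distanceMeters, double bearingDegrees) {
        double bearing = Math.toRadians(bearingDegrees);
        float x = (float) (distanceMeters * Math.sin(bearing));
        float y = (float) (distanceMeters * Math.cos(bearing));
        // z would simply be the target's altitude minus your own altitude.
        return new float[] { x, y };
    }
}
```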
Step 3: This method looks really cool! If I have my coordinate space from step 2, I can calculate d_x, d_y, d_z using the formula on Wikipedia. But after this step I'm not finished, because I just have 3D coordinates, and to project onto my screen I only need two. The Wikipedia article continues by calculating b_x, b_y from e_x, e_y, e_z, which is the viewer's position relative to the display surface. How can I get these values from my mobile device (Android/iOS)? Another approach suggested on Wikipedia is calculating b_x, b_y using the formula that involves s_x, s_y (the screen size) and r_x, r_y (the recording surface size). Again, how can I get the recording surface size from my mobile device?
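Regarding e_x, e_y, e_z: on a phone you usually don't measure a physical recording surface. Instead you can derive an equivalent focal length in pixels from the camera's field of view (on Android, for example, Camera.Parameters.getHorizontalViewAngle() can supply it). A hedged sketch under that assumption, with (dx, dy, dz) already in camera coordinates and dz pointing away from the lens:

```java
public final class PerspectiveProjector {
    /**
     * Projects a point (dx, dy, dz), given in camera coordinates, onto the screen.
     * dz must be positive (in front of the lens); fovDegrees is the camera's
     * horizontal field of view; screen sizes are in pixels.
     */
    public static float[] project(float dx, float dy, float dz,
                                  int screenWidth, int screenHeight,
                                  float fovDegrees) {
        // Focal length in pixels, derived from the field of view. This plays
        // the role of the viewer distance e_z in the Wikipedia formulas.
        double focal = (screenWidth / 2.0) / Math.tan(Math.toRadians(fovDegrees) / 2.0);
        float bx = (float) (screenWidth / 2.0 + focal * dx / dz);
        float by = (float) (screenHeight / 2.0 - focal * dy / dz); // screen y grows downward
        return new float[] { bx, by };
    }
}
```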
I can't find anything about this on the internet. It seems that nobody on Android/iOS has ever implemented a perspective projection before...
Thank you very much for all of your answers! Also, links to useful sites would help!
I think you can find many answers in this other thread: Transform GPS-Points to Screen-Points with Perspective Projection in Android.
Hope it helped, bye!
Here's a simple solution I wrote for this issue.
A: Mapping GPS locations on the camera preview in Android
Hope it helped. :D
I would like to develop a personal app, and for this I need to detect my car's rotation.
In a previous thread I got an answer about which sensors are good for that, so that part is okay.
Now I would like to ask you to please summarize the essential mathematical relationships that are needed.
What I would like to see in my app:
- The car's rotation in degrees
- The car's actual speed (in general this app will be used in slow-speed situations, around 3-5 km/h)
I think the harder part of this is detecting the rotation in real time. It would be good if the app could work when I place the phone in a car holder in landscape or portrait mode.
So please summarize for me which equations, formulas, and relationships are needed to calculate the car's rotation. And please tell me your recommendation for which motion/position sensors are best for this purpose (gravity, accelerometer, gyro, ...).
First I thought I would use Android 2.2 for better compatibility with my phones, but 2.3.3 is okay for me too. In that case I can use TYPE_ROTATION_VECTOR, which looks like a good thing, but I don't really know whether it can be useful for me or not.
I don't want full source code; I would like to develop it myself, but I need to know where to start and what math knowledge is needed. And for the sensor question: I'm a bit confused, as there are many sensors which could possibly work for me.
Thanks,
There is no deep math that you need. You should use TYPE_MAGNETIC_FIELD and TYPE_GRAVITY if it is available; otherwise use TYPE_ACCELEROMETER, but then you need to filter the accelerometer values using a Kalman filter or a low-pass filter. You should use the direction of the back camera as the reference. This direction is the azimuth returned by calling getOrientation, but before calling getOrientation you need to call remapCoordinateSystem(inR, AXIS_X, AXIS_Z, outR) to get the correct azimuth. Then, as long as the device is not lying flat, it does not matter what the device orientation is (landscape or portrait). Just make sure that the phone screen is facing the opposite direction of the car's direction of travel.
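A sketch of that recipe in code, assuming a listener that receives both TYPE_GRAVITY and TYPE_MAGNETIC_FIELD events (registration boilerplate omitted, field names illustrative):

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorManager;

public class AzimuthReader {
    private final float[] gravity = new float[3];
    private final float[] geomagnetic = new float[3];
    private final float[] inR = new float[9];
    private final float[] outR = new float[9];
    private final float[] orientation = new float[3];

    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_GRAVITY) {
            System.arraycopy(event.values, 0, gravity, 0, 3);
        } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            System.arraycopy(event.values, 0, geomagnetic, 0, 3);
        }
        if (SensorManager.getRotationMatrix(inR, null, gravity, geomagnetic)) {
            // Remap so the azimuth follows the back camera, per the answer above.
            SensorManager.remapCoordinateSystem(inR,
                    SensorManager.AXIS_X, SensorManager.AXIS_Z, outR);
            SensorManager.getOrientation(outR, orientation);
            float azimuthDegrees = (float) Math.toDegrees(orientation[0]);
            // Feed azimuthDegrees into the turn-tracking logic described below.
        }
    }
}
```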
Now declare two class members, startDirection and endDirection. In the beginning, startDirection and endDirection have the same value. Now, if the azimuth changes by more than, say, 3 degrees (there is always a little fluctuation), change endDirection to this value, and continue changing it until, say, 20 returned azimuths have the same value (you have to experiment with this number). This means the car has stopped turning; then calculate the difference between startDirection and endDirection, which gives you the degrees of rotation. Now set startDirection and endDirection to this new azimuth and wait for the next turn.
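A possible implementation of that startDirection/endDirection bookkeeping, keeping the 3-degree threshold and 20-sample settling count from the answer as tunable assumptions (note that it does not handle the 0°/360° wrap-around):

```java
public class TurnTracker {
    private static final float CHANGE_THRESHOLD_DEG = 3f; // fluctuation tolerance
    private static final int SETTLED_SAMPLES = 20;        // experiment with this

    private float startDirection;
    private float endDirection;
    private int stableCount;
    private boolean initialized;

    /** Feed each azimuth (degrees); returns the turn in degrees once it settles, else null. */
    public Float onAzimuth(float azimuth) {
        if (!initialized) {
            startDirection = endDirection = azimuth;
            initialized = true;
            return null;
        }
        if (Math.abs(azimuth - endDirection) > CHANGE_THRESHOLD_DEG) {
            endDirection = azimuth; // still turning
            stableCount = 0;
            return null;
        }
        if (++stableCount < SETTLED_SAMPLES) return null;
        stableCount = 0;
        float rotation = endDirection - startDirection;
        startDirection = endDirection; // wait for the next turn
        if (Math.abs(rotation) < CHANGE_THRESHOLD_DEG) return null; // never really turned
        return rotation;
    }
}
```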
Hello everybody. I'm developing a game that uses the accelerometer to move a sprite along the Y-axis. Everything works well, but the problem is that the angle at which the sprite doesn't move is 0°, and this means that when I want to move the sprite upwards I have to turn my hands in an unusual way... So I thought I could set the base angle to 45° so all movements would be easier. How can I do that?
I'm not sure about setting a base angle, but you could subtract 45° from your measurements of the accelerometer in your program, which would have the same effect.
Just define a base vector and calculate the difference between it and the data received from the accelerometer. I would define the base vector on startup, or average it over some time to reduce noise (yes, the accelerometers used in mobile devices are pretty low quality).
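A minimal sketch of that base-vector calibration, assuming you average the first readings at startup (the sample count is an arbitrary assumption):

```java
public class BaseVectorCalibrator {
    private static final int CALIBRATION_SAMPLES = 50; // assumed averaging window
    private final float[] base = new float[3];
    private int samples;

    /** Returns the accelerometer reading relative to the base vector, or null while calibrating. */
    public float[] process(float[] accel) {
        if (samples < CALIBRATION_SAMPLES) {
            // Accumulate the running average of the first N readings.
            for (int i = 0; i < 3; i++) base[i] += accel[i] / CALIBRATION_SAMPLES;
            samples++;
            return null;
        }
        float[] delta = new float[3];
        for (int i = 0; i < 3; i++) delta[i] = accel[i] - base[i];
        return delta;
    }
}
```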
When I listen to the orientation event in an Android app, I get a SensorEvent which contains 3 floats: azimuth, pitch, and roll, relative to the real-world axes.
Now say I am building an app like Labyrinth, but I don't want to force the user to hover over the phone and hold it so that the XY plane is parallel to the ground. Instead, I want to allow the user to hold the phone as they wish, lying down or perhaps sitting and holding the phone at an angle. In other words, I need to calibrate the phone according to the user's preference.
How can I do that?
Also note that I believe the answer has to do with getRotationMatrix and getOrientation, but I am not sure how!
Please help! I've been stuck at this for hours.
For a Labyrinth-style app, you probably care more about the acceleration (gravity) vector than the axes' orientation. This vector, in the phone's coordinate system, is given by the combination of the three accelerometer measurements rather than by the rotation angles. Specifically, only the x and y readings should affect the ball's motion.
If you do actually need the orientation, then the 3 angular readings represent the 3 Euler angles. However, I suspect you probably don't really need the angles themselves, but rather the rotation matrix R, which is returned by the getRotationMatrix() API. Once you have this matrix, it is basically the calibration you are looking for. When you want to transform a vector in world coordinates into your device coordinates, you should multiply it by the inverse of this matrix (where, in this special case, inv(R) = transpose(R)).
So, following the example I found in the documentation, if you want to transform the world gravity vector g ([0 0 g]) to the device coordinates, multiply it by inv(R):
g = inv(R) * g
(note that this should give you the same result as reading the accelerometers)
Possible APIs to use here: the invertM() and multiplyMV() methods of the android.opengl.Matrix class.
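A sketch of that transform with those two methods; note that android.opengl.Matrix works on 4x4 column-major matrices, so you would ask getRotationMatrix for a float[16] and use homogeneous vectors (w = 1). As a sanity check, the result should roughly match the raw accelerometer readings, as noted above:

```java
import android.hardware.SensorManager;
import android.opengl.Matrix;

public final class GravityTransform {
    /** Transforms the world gravity vector into device coordinates: g_device = inv(R) * g_world. */
    public static float[] worldToDevice(float[] accel, float[] magnetic) {
        float[] r = new float[16];    // 4x4 form, usable with android.opengl.Matrix
        float[] rInv = new float[16];
        float[] gWorld = { 0f, 0f, 9.81f, 1f }; // homogeneous [0 0 g 1]
        float[] gDevice = new float[4];
        if (!SensorManager.getRotationMatrix(r, null, accel, magnetic)) return null;
        // For a pure rotation, inversion equals transposition, as noted above.
        Matrix.invertM(rInv, 0, r, 0);
        Matrix.multiplyMV(gDevice, 0, rInv, 0, gWorld, 0);
        return gDevice; // should roughly match the raw accelerometer readings
    }
}
```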
I don't know of any Android-specific APIs, but all you want to do is decrease the azimuth by a certain amount, right? So you move the "origin" from (0,0,0) to whatever the user wants. In pseudocode:
myGetRotationMatrix:
return getRotationMatrix() - origin
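Since you can't literally subtract an origin from a matrix, one concrete reading of that pseudocode is to subtract a stored calibration orientation from each getOrientation() result; a sketch, with illustrative names:

```java
public class CalibratedOrientation {
    private final float[] origin = new float[3];
    private boolean calibrated;

    /** Call once when the user signals "this is my neutral position". */
    public void calibrate(float[] orientation) {
        System.arraycopy(orientation, 0, origin, 0, 3);
        calibrated = true;
    }

    /** Pass in the output of SensorManager.getOrientation(); returns angles relative to the origin. */
    public float[] relativeTo(float[] orientation) {
        if (!calibrated) calibrate(orientation); // first reading becomes the origin
        float[] result = new float[3];
        for (int i = 0; i < 3; i++) result[i] = orientation[i] - origin[i];
        return result;
    }
}
```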