I have read in many places, like One Screen Turn Deserves Another, that: "The sensor coordinate system used by the API for the natural orientation of the device does not change as the device moves, and is the same as the OpenGL coordinate system."
Now, I get the same reading as this image:
What I don't understand is: if the coordinate system doesn't change when I rotate the phone (always with the screen facing the user), gravity should always be applied on the Y axis. The axis should only change if I put the phone in a position where the screen is no longer facing the user, like resting on a table, where gravity should be applied on the Z axis.
What is wrong with my understanding?
Thanks! Guillermo.
The axes are swapped when the device's screen orientation changes. Per the article you cited:
However, the Android sensor APIs define the sensor coordinate space to be relative to the top and side of the device — not the short and long sides. When the system reorients the screen in response to holding the phone sideways, the sensor coordinate system no longer lines up with the screen’s coordinate system, and you get unexpected rotations in the display of your app.
To access the un-swapped values if you'd like, use indices 3, 4 and 5 in values[], otherwise some of the suggestions mentioned in that same article work quite nicely!
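For the "line up with the screen" case the article describes, the usual approach is SensorManager.remapCoordinateSystem driven by the current display rotation. A minimal sketch, assuming a TYPE_ROTATION_VECTOR sensor and code inside an Activity that implements SensorEventListener (the axis choices per rotation are the commonly used convention, not taken verbatim from the article):

    @Override
    public void onSensorChanged(SensorEvent event) {
        float[] deviceFrame = new float[9];
        SensorManager.getRotationMatrixFromVector(deviceFrame, event.values);

        float[] screenFrame = new float[9];
        switch (getWindowManager().getDefaultDisplay().getRotation()) {
            case Surface.ROTATION_90:
                SensorManager.remapCoordinateSystem(deviceFrame,
                        SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, screenFrame);
                break;
            case Surface.ROTATION_180:
                SensorManager.remapCoordinateSystem(deviceFrame,
                        SensorManager.AXIS_MINUS_X, SensorManager.AXIS_MINUS_Y, screenFrame);
                break;
            case Surface.ROTATION_270:
                SensorManager.remapCoordinateSystem(deviceFrame,
                        SensorManager.AXIS_MINUS_Y, SensorManager.AXIS_X, screenFrame);
                break;
            default: // ROTATION_0: natural orientation, no remapping needed
                screenFrame = deviceFrame;
        }
        // screenFrame now describes the orientation relative to the screen,
        // not to the device body.
    }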
Quite an old question, but I find it still relevant. As of today, the Sensor Overview page, in its Sensor Coordinate System section, still says:
The most important point to understand about this coordinate system is that the axes are not swapped when the device's screen orientation changes—that is, the sensor's coordinate system never changes as the device moves. This behavior is the same as the behavior of the OpenGL coordinate system.
To me this wording is still confusing; of course, it might be because I'm not a native English speaker.
My understanding is that in Android (as in iOS) the coordinate system assumed by the sensors is fixed to the device. That is, the coordinate system is attached to the device and its axes rotate along with it.
So, for a phone whose natural orientation is portrait, the Y-axis points upward when the phone is held vertically in portrait in front of the user. See the image below, from the same Android guide:
Then, when the user rotates the phone to landscape-left orientation (so with the home button on the right side), the Y-axis points to the left. See the image below, from a MATLAB tutorial (although the screen is not really user-facing anymore):
Then there's the frequently cited post from the Android dev blog, One Screen Turn Deserves Another, which says:
The sensor coordinate system used by the API for the natural orientation of the device does not change as the device moves, and is the same as the OpenGL coordinate system
which to me sounds like exactly the opposite of my previous reasoning. But then again, in its following So What’s the Problem? section, you do see that when the phone is rotated to landscape left, the Y-axis points to the left, as in my previous reasoning.
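One quick way to see that the frame is fixed to the device body is to log the raw accelerometer: held upright in portrait, gravity (about 9.81 m/s²) shows up on the Y axis; turned to landscape left, it moves to the X axis. A minimal sketch, assuming a standard SensorManager registration:

    // Register with:
    // sensorManager.registerListener(listener,
    //         sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
    //         SensorManager.SENSOR_DELAY_NORMAL);
    SensorEventListener listener = new SensorEventListener() {
        @Override
        public void onSensorChanged(SensorEvent event) {
            // Upright portrait:                        values[1] (Y) is about +9.81
            // Landscape left (home button on right):   values[0] (X) is about +9.81
            // Flat on a table, screen up:              values[2] (Z) is about +9.81
            Log.d("AccelAxes", String.format("x=%.2f y=%.2f z=%.2f",
                    event.values[0], event.values[1], event.values[2]));
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    };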
Usually there's a compass and an accelerometer on a typical smartphone these days. So, using information from these sensors, can we recreate the whole orientation in which the photo was taken?
I mean, if you open the compass app on your phone, it first states the direction you are facing (link), how much the phone is tilted forward or backward (I don't know how to state it in a better way) (link), and how much it is tilted sideways (link). Does that cover 3 degrees of freedom (I guess)?
Is it enough information to recreate that orientation of the phone?
Also, if you think this is not the right place to ask such a question, could you comment on where I should ask it?
By recording sensor values, it is possible to restore the phone orientation in which the photo was taken. Some fundamental XR apps (e.g. Google Street View) actually do this.
Device rotation
There is a handy helper function, SensorManager.getRotationMatrix, exactly for that purpose. Given magnetic-field and gravity sensor values, you can obtain the device rotation matrix (a complete set of orthogonal 3D basis vectors), which is enough for you to reproduce the phone's orientation afterwards.
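A minimal sketch of that call, assuming gravity and geomagnetic are the latest values[] you copied from TYPE_ACCELEROMETER and TYPE_MAGNETIC_FIELD events:

    float[] rotationMatrix = new float[9];
    float[] inclinationMatrix = new float[9];
    if (SensorManager.getRotationMatrix(rotationMatrix, inclinationMatrix,
            gravity, geomagnetic)) {
        float[] orientation = new float[3];
        SensorManager.getOrientation(rotationMatrix, orientation);
        float azimuth = (float) Math.toDegrees(orientation[0]); // compass heading
        float pitch   = (float) Math.toDegrees(orientation[1]); // tilt forward/backward
        float roll    = (float) Math.toDegrees(orientation[2]); // tilt sideways
        // Store rotationMatrix (or the three angles) alongside the photo
        // to reproduce the pose later.
    }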
Camera orientation
In some cases the camera's up direction may differ from the device's up direction due to screen orientation changes (portrait or landscape, locked or auto-rotate). So if you handle raw camera input, you may need to record the screen orientation too. See Display.getRotation.
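A small sketch of recording the screen rotation at capture time, assuming an Activity context:

    int rotation = getWindowManager().getDefaultDisplay().getRotation();
    int degrees;
    switch (rotation) {
        case Surface.ROTATION_90:  degrees = 90;  break;
        case Surface.ROTATION_180: degrees = 180; break;
        case Surface.ROTATION_270: degrees = 270; break;
        case Surface.ROTATION_0:
        default:                   degrees = 0;   break;
    }
    // Save 'degrees' together with the sensor rotation matrix for the captured photo.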
I am using Android's API 14 Camera face detection to draw a rectangle over the face detected by the camera.
It works on most devices (Galaxy Nexus, S4, S Note 2). But on the S3 SGH-T999 and SGH-I747 (T-Mobile and AT&T locked versions), the Face.rect object returned was outside the normal range of [-1000, 1000].
Specifically, Face.rect.left = -1165 (or other numbers < -1000).
Quote from the documentation [Camera.Face.rect]:
"The coordinates can be smaller than -1000 or bigger than 1000. But at least one vertex will be within (-1000, -1000) and (1000, 1000)."
This is the method that I use [link here]:
onFaceDetection(android.hardware.Camera.Face[], android.hardware.Camera)
Other data:
the app is set to portrait only
the app uses the front-facing camera only
My questions are:
Has anyone experienced the same problem?
What does a coordinate smaller than -1000 mean?
How can I solve this in order to correctly draw the rectangle over the detected face?
I have looked around for a week and did not find this problem reported by other users.
Again, my app works fine on all other devices except those two.
Thanks in advance.
I am facing a similar kind of issue. What I found is that the face rectangle obtained from the onFaceDetection callback uses a different coordinate system on different Android phones. I tested my application on Samsung and Micromax devices; they follow the rectangle coordinate values as per the Android documentation (i.e. -1000 to 1000).
When I tested my application on the Sony Xperia L and Sony Xperia M, I observed that they do not follow the coordinates according to the Android documentation. Rather, they use coordinates whose origin (0,0) is at the top-right corner of the screen in portrait mode.
When I applied a matrix accordingly, I got the rectangle plotted perfectly (a sketch of the documented mapping follows below). This led me to dig a little deeper into the Android stack. I believe it is the vendor of the phone who manipulates the rectangle coordinates, not the original Android stack.
My question is: is there any way to find out which coordinate system the obtained rectangle follows, before drawing it?
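For reference, here is a hedged sketch of the mapping described in the Camera.Face documentation, from the driver's (-1000,-1000)..(1000,1000) space into view pixels; displayOrientation, viewWidth, viewHeight, frontFacing and face are assumed to come from your own camera setup and the onFaceDetection callback. It won't detect a vendor that silently moves the origin, but it does show how a raw value like left = -1165 can legitimately end up off-screen after mapping:

    Matrix matrix = new Matrix();
    // Mirror horizontally for the front-facing camera.
    matrix.setScale(frontFacing ? -1 : 1, 1);
    // Compensate for the display orientation passed to Camera.setDisplayOrientation.
    matrix.postRotate(displayOrientation);
    // Scale from the 2000x2000 driver space to the preview view size.
    matrix.postScale(viewWidth / 2000f, viewHeight / 2000f);
    matrix.postTranslate(viewWidth / 2f, viewHeight / 2f);

    RectF mapped = new RectF(face.rect);
    matrix.mapRect(mapped);
    // 'mapped' is now in view coordinates; a vertex outside [-1000, 1000]
    // simply lands outside the visible preview.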
I have a very creative requirement. I am not sure if this is feasible, but it would certainly spice up my app if it were possible.
Premise: On Android phones, if the screen is covered by a hand (not touching, just close to the screen), or if the phone is placed over the ear during a call, the phone locks or basically blacks out. So there must be some technology that recognizes that my hand is near the screen.
Problem: I have an image in my app. If the user points at the image without touching the screen, just as an extension of the premise, I must be able to know that the user is pointing at the image and change the image. Is this possible?
UPDATE: An example use:
Say I want to build a fun app where touching the image leads to some other place. For example, I have two doors: one to a car and one to a lion. Now, just when the user is about to touch door 1, the door should show a message asking "are you sure?", and then actually touching it takes you to another place. It's a rudimentary example, but I hope you get the point.
The feature you are talking about is the proximity sensor. See Sensor and SensorEvent.values for Sensor.TYPE_PROXIMITY.
You could get the distance of the hand from the screen, but you won't really know where the hand is in the XY coordinate system. So you won't be able to figure out whether the user is pointing at the "car door" or the "lion door".
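A minimal sketch of reading it, inside an Activity (standard SensorManager usage; most devices report only a coarse distance, often just near/far):

    SensorManager sm = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
    final Sensor proximity = sm.getDefaultSensor(Sensor.TYPE_PROXIMITY);

    SensorEventListener listener = new SensorEventListener() {
        @Override
        public void onSensorChanged(SensorEvent event) {
            float distanceCm = event.values[0]; // distance in cm; often just a near/far value
            boolean near = distanceCm < proximity.getMaximumRange();
            Log.d("Proximity", "distance=" + distanceCm + " near=" + near);
            // Note: there is no X/Y information here, only proximity.
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    };
    sm.registerListener(listener, proximity, SensorManager.SENSOR_DELAY_NORMAL);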
You could make this work on a phone with a wide-angle front camera that can see the area over the whole screen. You'd have to write the software for recognizing hand movements and translate these into screen actions.
Why not just use touch, if I may ask?
I am busy writing an app that lets me use my phone (Galaxy S2 2.2.3) as a steering wheel. Just a nerdy weekend project really.
I have gotten everything working regarding calculating the orientation of the device using SensorManager.GetOrientation(), with one slight snag: the rotation around the axis that comes out of the screen and the back of the phone rises from 0 to 90 degrees and then falls back down in the same manner to 0 degrees, instead of proceeding to 180.
This had me really confused until I read something somewhere suggesting that the API might be flipping the orientation internally (the screen itself doesn't flip), so the phone's coordinate system flips from left to right landscape (it's worth noting that I have it locked in landscape mode in the manifest). This would explain the weird behaviour in terms of orientation.
Does anyone know how to stop this happening, or have I gone wrong completely in my understanding?
I'm trying to understand the gyroscope sensor output from the Nexus S. When you rotate about the z-axis (the axis perpendicular to the screen), the z-axis correctly records the angular velocity as a result of the turn. However, the y-axis also reports spikes in angular velocity, despite no change in the orientation of the device relative to the y-axis. So if I turn around when holding the phone at a particular orientation, it appears that I have not only turned around, but also tilted the phone left/right (which I haven't).
Why is this? Any ideas how to compensate or correct for this?
Please note I'm talking about the raw sensor output here.
First of all, raw sensor data always contains some bias that has to be filtered out. Furthermore, it is almost impossible to fully suppress rotation about the other axes when turning the device by hand. If you suspect your Nexus of having a sensor bug, or you want to know exactly what's going on, I suggest building your own apparatus. As inspiration, a picture of mine :-)
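To illustrate the first point, here is a hedged sketch of removing a constant gyroscope bias, inside a class implementing SensorEventListener; averaging while the device lies perfectly still is just one simple approach, and it does not address the cross-axis spikes, which need proper sensor fusion (complementary or Kalman filtering):

    private static final int CALIBRATION_SAMPLES = 500;
    private final float[] biasSum = new float[3];
    private final float[] bias = new float[3];
    private int sampleCount = 0;

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_GYROSCOPE) return;

        if (sampleCount < CALIBRATION_SAMPLES) {
            // Calibration phase: the device must be at rest.
            for (int i = 0; i < 3; i++) biasSum[i] += event.values[i];
            if (++sampleCount == CALIBRATION_SAMPLES) {
                for (int i = 0; i < 3; i++) bias[i] = biasSum[i] / CALIBRATION_SAMPLES;
            }
            return;
        }

        float wx = event.values[0] - bias[0]; // angular rate about device X, rad/s
        float wy = event.values[1] - bias[1]; // angular rate about device Y, rad/s
        float wz = event.values[2] - bias[2]; // angular rate about device Z, rad/s
        // Work with the corrected rates; small residual readings on the other
        // axes during a hand-held turn are expected.
    }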