Convert ARCore LightEstimate to ARKit ARLightEstimate - android

I'm trying to find some middle ground between these two SDKs in terms of the lighting estimate read from the camera's current frame.
Is there a way to convert one to the other, or to convert both into some other metric that will let me accurately assess lighting conditions whether the data comes from Android or iOS?
Specifically I'm interested in ambient color and temperature (so the iOS estimate is closer to what I want).

It seems this is as close as we can get to translating an ambient light estimate to lumens:
In general, though, lumens will range from 0-2000, so a rough estimate
would be to multiply the brightness (ranging from 0..1) by 2000.
So if we want to convert from ARCore to ARKit we can do lightEstimate * 2000 (ARCore's light estimate ranges from 0 to 1).
If we want to convert lumens to relative brightness, it's ambientIntensity / 2000 (ARKit returns ambientIntensity in lumens, roughly 0 to 2000).
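As a rough sketch of that conversion in Java (assuming the 0..1 value comes from ARCore's LightEstimate.getPixelIntensity() and that ARKit's ambientIntensity is in lumens; the 0-2000 range is the heuristic from the quote above, not an official mapping):

    // Rough brightness conversion helpers. The 0..2000 lumen range is a
    // heuristic taken from the answer above, not an official spec.
    public final class LightEstimateConverter {

        private static final float MAX_LUMENS = 2000f;   // assumed upper bound

        // ARCore -> "ARKit-style" lumens: pixelIntensity is ARCore's 0..1 estimate.
        public static float toLumens(float pixelIntensity) {
            return Math.max(0f, Math.min(1f, pixelIntensity)) * MAX_LUMENS;
        }

        // ARKit -> ARCore-style relative brightness: ambientIntensity is in lumens.
        public static float toRelativeBrightness(float ambientIntensity) {
            return Math.max(0f, Math.min(MAX_LUMENS, ambientIntensity)) / MAX_LUMENS;
        }
    }

Color temperature is harder to unify: ARKit reports it directly in kelvin (ambientColorTemperature), while ARCore only exposes a color-correction vector, so there is no equally simple formula for that part of the question.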

Related

How to use the numbers from Game Rotation Vector in Android?

I am working on an AR app that needs to move an image depending on the device's position and orientation.
It seems that Game Rotation Vector should provide the necessary data to achieve this.
However, I can't seem to understand what the values I get from the GRV sensor mean. For instance, in order to reach the same value on the Z axis I have to rotate the device 720 degrees. This seems odd.
If I could somehow convert these numbers into angles, from the reference frame of the device towards the x, y, z coordinates, my problem would be solved.
I have googled this issue for days and didn't find any sensible information on the meaning of the GRV coordinates and how to use them.
TL;DR: What do the numbers of the GRV sensor mean? And how do I convert them to angles?
As the docs state, the GRV sensor gives back a 3D rotation vector, represented by three components:
x axis (x * sin(θ/2))
y axis (y * sin(θ/2))
z axis (z * sin(θ/2))
This is confusing at first. The three components are not three separate angles: (x, y, z) is the unit vector of the rotation axis, and θ (pronounced theta) is the single angle of rotation about that axis, which isn't obvious from the formulas at all. The sin(θ/2) term is also why you only see the same component values again after a 720-degree rotation.
Note also that when working with angles, especially in 3D, we generally use radians, not degrees, so theta is in radians. This looks like a good introductory explanation.
But the reason it's given to us in this format is that it can easily be used in matrix rotations, especially as a quaternion. In fact, these are the first three components of a quaternion, the vector part that encodes the rotation axis; the 4th component is the scalar part, cos(θ/2), which pins down the amount of rotation. So a quaternion packs a general 3D rotation into four numbers.
These are directly usable in OpenGL which is the Android (and the rest of the world's) 3D library of choice. Check this tutorial out for some OpenGL rotations info, this one for some general quaternion theory as applied to 3D programming in general, and this example by Google for Android which shows exactly how to use this information directly.
If you read the articles, you can see why you get it in this form and why it's called the Game Rotation Vector: it's what 3D programmers have been using in games for decades at this point.
TL;DR: This example is excellent.
Edit - How to use this to show a 2D image which is rotated by this vector in 3D space.
In the example above, SensorManager.getRotationMatrixFromVector converts the Game Rotation Vector into a rotation matrix which can be applied to rotate anything in 3D. To apply this rotation to a 2D image, you have to think of the image in 3D, so it's actually a segment of a plane, like a sheet of paper. You'd map your image, which in the jargon is called a texture, onto this plane segment.
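A minimal sketch of that conversion step (not the Google sample itself; assumes a listener registered for TYPE_GAME_ROTATION_VECTOR, and the class name is illustrative):

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public class GrvListener implements SensorEventListener {

        // 4x4 matrix in a form OpenGL ES can consume directly.
        final float[] rotationMatrix = new float[16];

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() == Sensor.TYPE_GAME_ROTATION_VECTOR) {
                // Converts the rotation-vector components (x*sin(θ/2), ...)
                // into a rotation matrix for use in your model-view transform.
                SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { /* not needed here */ }
    }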
Here is a tutorial on texturing cubes in OpenGL for Android with example code and an in depth discussion. From cubes it's a short step to a plane segment - it's just one face of a cube! In fact that's a good resource for getting to grips with OpenGL on Android, I'd recommend reading the previous and subsequent tutorial steps too.
You mentioned translation as well. Look at the onDrawFrame method in the Google code example: there is a translation using gl.glTranslatef and then a rotation using gl.glMultMatrixf. This is how you translate and rotate (a sketch follows below).
The order in which these operations are applied matters. Here's a fun way to experiment with that: check out Livecodelab, a live 3D sketch-coding environment which runs inside your browser. In particular, this tutorial encourages reflection on the ordering of operations. There, the command move is the translation.
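A sketch of that translate-then-rotate ordering in a legacy GLES 1.x renderer (not the Google sample verbatim; rotationMatrix is the 16-element array filled by the listener above, and the translation values are placeholders):

    import javax.microedition.khronos.egl.EGLConfig;
    import javax.microedition.khronos.opengles.GL10;
    import android.opengl.GLSurfaceView;

    public class PlaneRenderer implements GLSurfaceView.Renderer {

        // Updated by the sensor listener above.
        final float[] rotationMatrix = new float[16];

        @Override
        public void onDrawFrame(GL10 gl) {
            gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            gl.glLoadIdentity();

            gl.glTranslatef(0f, 0f, -3f);          // 1) move the quad away from the camera
            gl.glMultMatrixf(rotationMatrix, 0);   // 2) then apply the sensor rotation

            // ... bind the texture and draw the plane segment here ...
        }

        @Override
        public void onSurfaceChanged(GL10 gl, int width, int height) { /* set viewport/projection */ }

        @Override
        public void onSurfaceCreated(GL10 gl, EGLConfig config) { /* load the texture */ }
    }

Swapping the two calls changes the pivot: the quad would then orbit the camera origin instead of spinning in place, which is exactly the kind of ordering effect the Livecodelab tutorial lets you play with.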

Is this Fourier Analysis of Luminance Signals Correct? (Android)

I'm writing an Android app that measures the luminance of camera frames over a period of time and calculates a heart rate by using Fourier analysis to find the wave's frequency. The problem is that my spectral analysis looks like this:
which is pretty much the inverse of what a spectral analysis should look like (something like a normal distribution). Can I accurately assess this to find the index of the maximum magnitude, or does this spectrum reveal that my data is too noisy?
EDIT:
Here's what my camera data looks like (I'm performing FFT on this):
It looks like you have two problems going on here:
1) The FFT output often places the values for negative frequencies to the right of the positive frequencies, which seems to be the case here. You therefore need to move the right half of the FFT to the left and put freq = 0 in the middle.
2) In the comments you say that you're plotting the magnitude, but that's clearly not the case (the magnitude should be non-negative and symmetric); you're probably just plotting the real part. Instead, take the magnitude, sqrt(Re*Re + Im*Im), where Re and Im are the real and imaginary parts respectively. (Depending on the form of your numbers, something like Math.sqrt(Math.pow(a.re, 2) + Math.pow(a.im, 2)).)
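A sketch of both fixes, assuming an interleaved real/imaginary output array like many Java FFT libraries produce (the layout is an assumption; adapt the indexing to whatever FFT class you use):

    // fft: interleaved output {Re0, Im0, Re1, Im1, ...} of length 2*N (assumed layout).
    // Returns the magnitude spectrum with the zero-frequency bin shifted to the middle.
    public static double[] shiftedMagnitude(double[] fft) {
        int n = fft.length / 2;
        double[] magnitude = new double[n];
        for (int k = 0; k < n; k++) {
            double re = fft[2 * k];
            double im = fft[2 * k + 1];
            magnitude[k] = Math.sqrt(re * re + im * im);   // |X[k]|, always >= 0
        }
        // "fftshift": move the second half (negative frequencies) in front of
        // the first half so that frequency 0 sits in the middle of the plot.
        double[] shifted = new double[n];
        int half = n / 2;
        System.arraycopy(magnitude, half, shifted, 0, n - half);
        System.arraycopy(magnitude, 0, shifted, n - half, half);
        return shifted;
    }

For the heart-rate estimate itself you can skip the shift: ignore bin 0 (DC), search bins 1..N/2 of the magnitude array for the peak, and that bin's frequency is k * sampleRate / N Hz (times 60 for beats per minute).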

How to calculate the phone's movement in the vertical direction from rest?

I am developing an app for Android for which I need to know how to calculate the movement of the device upwards in the vertical direction.
For example, the device is at rest (point A); the user picks it up in his hand (point B); now there is a height change between point A and point B. How would I calculate that?
I have already gone through the articles about sensors and accelerometers, but I couldn't really find anything to help me with this. Does anyone have any ideas?
If you integrate the acceleration twice you get position, but the error is horrible; it is useless in practice. Here is an explanation of why (Google Tech Talk) at 23:20. I highly recommend this video.
Now, you do not need anything accurate for this, and that is a different story. The linear acceleration is available after sensor fusion, as described in the video; see Sensor.TYPE_LINEAR_ACCELERATION at SensorEvent. I would first try a high-pass filter to detect a sudden increase in the linear acceleration along the vertical axis.
I have no idea whether it is good for your application.
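A minimal sketch of that idea (the filter constant, the threshold, and the "vertical = device Z axis" simplification are all assumptions; strictly you'd rotate the vector into world coordinates first):

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;

    public class LiftDetector implements SensorEventListener {

        private static final float ALPHA = 0.8f;        // smoothing constant (assumed)
        private static final float THRESHOLD = 1.5f;    // m/s^2, tune experimentally

        private float lowPassZ = 0f;

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != Sensor.TYPE_LINEAR_ACCELERATION) return;

            // Treat the device Z axis as "vertical" for this sketch; for a phone
            // lying flat that is roughly true. A proper solution would first
            // rotate the vector into world coordinates using the rotation vector.
            float z = event.values[2];
            lowPassZ = ALPHA * lowPassZ + (1 - ALPHA) * z;
            float highPassZ = z - lowPassZ;              // high-pass = signal minus low-pass

            if (highPassZ > THRESHOLD) {
                // Sudden upward acceleration: the phone was probably just picked up.
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }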
You can actually establish (only) the vertical position without measuring acceleration over time. This is accomplished by measuring the angle between the direction to the center of the earth, and the direction to the magnetic north pole.
This angle only changes (significantly) when the altitude (height) of the phone changes. What you do is use the accelerometer and magnetometer to get two float[3] arrays, treat these as vectors, make them unit vectors, and then the angle between any two unit vectors is arccos(A·M).
Note that's the dot product, i.e. Math.acos(A[0]*M[0] + A[1]*M[1] + A[2]*M[2]). Any change in this angle corresponds to a change in height. Also note that this will have to be calibrated to real units, and the ratio of change in angle to change in height will differ at different latitudes. But it is a method of getting an absolute value for height, though of course the angle also becomes skewed when the phone is undergoing acceleration, or when there are magnets nearby :)
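A short sketch of that calculation (a and m are the latest accelerometer and magnetometer float[3] readings; normalization is folded into the dot product):

    // Returns the angle, in radians, between the accelerometer vector (a)
    // and the magnetometer vector (m).
    public static double angleBetween(float[] a, float[] m) {
        double normA = Math.sqrt(a[0] * a[0] + a[1] * a[1] + a[2] * a[2]);
        double normM = Math.sqrt(m[0] * m[0] + m[1] * m[1] + m[2] * m[2]);
        double dot = a[0] * m[0] + a[1] * m[1] + a[2] * m[2];
        // Clamp to [-1, 1] to guard against rounding errors before acos.
        double cos = Math.max(-1.0, Math.min(1.0, dot / (normA * normM)));
        return Math.acos(cos);
    }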
You can also correlate it with the magnetic field sensor, which reports values in microtesla.
You can use dist = the double integral of acceleration (approximated in practice by a double summation over samples) = the integral of speed + a constant.

Is it possible to measure distance to object with camera?

Is it possible to measure the distance to an object with a phone camera?
I mean, in my application I start the camera, point it at the object (let's say a house), and then press a button; it calculates the distance and shows it on screen.
If it's possible, where can I find a tutorial or information about it?
I accept that the question has been answered adequately (with the obvious caveats of requiring level ground and possible accuracy problems), but for those who don't believe it can be done, or who think it needs a video camera, let me explain the low-level math needed to do it...
The picture above shows me standing outside my house. The horizontal (d) is the distance I want to measure and the vertical (h) is the height above the ground at which I'm holding the camera. In this case h is a known value when I'm holding the Android camera at eye level (approx. 67 inches, or 1.7 metres). When I tilt the camera to aim it directly at the point where my house meets the ground, all the software needs to do is work out the angle (a) relative to vertical, and it can then calculate d using...
d = h * tan a
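In code that final step is trivial; the fiddly part is getting the tilt angle from the sensors (e.g. via the rotation vector and SensorManager.getOrientation), and the eye-level height is an assumption you'd let the user adjust:

    // h: camera height above the ground in metres (e.g. ~1.7 m at eye level).
    // tiltFromVertical: angle 'a' in radians between the camera's viewing
    // direction and straight down, derived from the device orientation sensors.
    public static double distanceToGroundPoint(double h, double tiltFromVertical) {
        return h * Math.tan(tiltFromVertical);
    }

With h = 1.7 m and a tilt of 80° from vertical this gives about 1.7 * tan 80° ≈ 9.6 m.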
Well you should read how ithinkdiff.com "measures" the distance:
Uses the angle of the iPhone to estimate the distance to a point on the ground.
Hold the iPhone in front of you, align the point in the camera and get a direct
reading of the distance. The distance can then be used in the speed tool.
So basically it takes the height at which you hold the phone (eye level), then you point the camera at the spot where the object touches the ground. The phone measures the inclination and, with simple trigonometry, calculates the distance.
This is of course not very accurate. It gets less accurate the farther away the object is. It also assumes that the ground is level.
Nope. The camera can only give you image data, and an image alone doesn't give you enough information to derive depth. If you had multiple images with location information, or even video, you could process them to triangulate the distance, but a single image alone is not enough to give you a distance.
You can use the technique our eyes use to get a sense of depth and distance.
1) Get two images of the same object from two different camera positions.
2) The disparity (pixel offset) of the object between the two images is inversely proportional to the distance between the camera and the object.
The implementation is available at https://github.com/agnelvishal/Distance-between-camera-and-object
Here is the research paper http://dsc.ijs.si/files/papers/S101%20Mrovlje.pdf
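The underlying relation in the paper is the classic pinhole-stereo one: distance = focal length × baseline / disparity. A sketch under that assumption (all three inputs have to come from your own calibration and feature matching):

    // focalLengthPx: focal length expressed in pixels (from camera calibration).
    // baselineMetres: distance between the two camera positions.
    // disparityPx: horizontal pixel offset of the object between the two images.
    public static double stereoDistance(double focalLengthPx,
                                        double baselineMetres,
                                        double disparityPx) {
        if (disparityPx <= 0) {
            throw new IllegalArgumentException("Disparity must be positive");
        }
        // Classic pinhole-stereo relation: Z = f * B / d.
        return focalLengthPx * baselineMetres / disparityPx;
    }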
You have the tilt angle from the phone's accelerometer. If you calculate the tangent of this angle and multiply it by the height of the camera lens above the ground, you get the distance.
I think this app uses the approach MisterSquonk mentioned (it's free). Watch the "Trigonometry" technique.
I think that by using FastCV you can calculate the distance between the camera and the object. With this you don't need to know the angle or the position of the camera above ground level. Take a look at this question here.
One way to achieve this is to use the DPI of your device's images. You can take a picture and calculate the height, but you'll need another object of known size as a reference. The problem with this method could be the perspective between the objects.
I think it could be possible to do this using the phone camera. I know that modern phones use lenses to focus on an object. If it's possible to know their focal length and their position (displacement) when focusing on the chosen object, it's also possible to determine the distance.
No. Only with two cameras in stereo mode, like the Xbox 360 Kinect. It takes at least 3 points to triangulate distance.

Regarding BMA-150 Acceleration sensor

At present I am working on the HAL part of sensors in the Android SDK. We are using the 3-axis BMA-150 accelerometer sensor to get acceleration values with respect to the X, Y and Z axes. I want to know whether this sensor gives its output directly in SI units through some calibration technique, or not. I also noticed that in the sensor.c file they mention
720.0 LSG = 1G (9.8 m/s2). What is the relation between LSG and acceleration due to gravity?
What is meant by LSG?
Why are they multiplying the accelerometer's X, Y, Z output values by 9.8/720.0f? Please help with this part.
Thanks
Vinay
Without knowing anything about the device: 9.8 meters per second squared is the value of gravitational acceleration at the Earth's surface. The equation you quote seems to be the definition of "LSG", and the only thing that makes sense to define in this context is the unit in which the output is provided. So the device will probably report about 720.0 on the axis that is vertical. By multiplying by 9.8/720.0 you renormalize the value to the SI unit (m/s^2).
(Standard gravitational acceleration, denoted g (which explains the "1G"), is 9.80665 m/s^2; it varies by about half a percent between the equator and the poles, and the device is probably unable to give more than two significant digits of precision anyway.)
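So the whole conversion the HAL performs is a single scale factor; a minimal sketch using the 720-counts-per-g figure quoted from sensor.c:

    // Raw BMA-150 counts -> m/s^2, using the 720 LSG = 1 g scale from sensor.c.
    public static float countsToMetresPerSecondSquared(int rawCounts) {
        final float COUNTS_PER_G = 720.0f;       // "LSG": counts per 1 g
        final float GRAVITY = 9.80665f;          // standard gravity in m/s^2
        return rawCounts * GRAVITY / COUNTS_PER_G;
    }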
