Solving for calibration quaternion - android

I'm writing an Android app that requires the rotation vector. I'd like to use TYPE_ROTATION_VECTOR, but on some of my test devices the magnetometer doesn't perform well, to say the least. Instead, TYPE_GAME_ROTATION_VECTOR provides much smoother data (but I can't get direction relative to the Earth from it). What I ended up doing is running both virtual sensors while my data is loading. I now have an average quaternion for each; call them R (TYPE_ROTATION_VECTOR) and Rg (TYPE_GAME_ROTATION_VECTOR).
Once calibration is over I only run TYPE_GAME_ROTATION_VECTOR, but I would like to correct it for North. What I think I can do is something like R = Rg * C, where C is my calibration and Rg is the new TYPE_GAME_ROTATION_VECTOR data after a low-pass filter (here ' denotes the conjugate, which is the inverse for unit quaternions). What I tried:
1. R = Rg * C
2. R * R' = Rg * C * R'
3. U = Rg * C * R' // Here U is the identity quaternion, since R * R' = U for a unit quaternion
4. C * R' = Rg' // This follows because quaternion multiplication is associative:
// Rg * (C * R') = U from step 3, therefore (C * R') must be
// equal to the conjugate of Rg
5. C = Rg' * R'' // Multiply both sides on the right by R'' (I found this online somewhere; I hope it's right)
6. C = Rg' * R // R'' is just R
Now that I have C, I can take new values (after the low-pass filter) from TYPE_GAME_ROTATION_VECTOR, multiply them by C, and get the actual rotation quaternion R, which should be similar to what TYPE_ROTATION_VECTOR would have provided with a steady North.
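In code, that derivation amounts to something like the sketch below. This is only a minimal illustration: the helper names are mine, and all quaternions are assumed to be unit quaternions stored as [w, x, y, z] (the layout SensorManager.getQuaternionFromVector produces), so the conjugate equals the inverse.
fun conjugate(q: FloatArray) = floatArrayOf(q[0], -q[1], -q[2], -q[3])

fun multiply(a: FloatArray, b: FloatArray) = floatArrayOf(
    a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3],
    a[0] * b[1] + a[1] * b[0] + a[2] * b[3] - a[3] * b[2],
    a[0] * b[2] + a[2] * b[0] + a[3] * b[1] - a[1] * b[3],
    a[0] * b[3] + a[3] * b[0] + a[1] * b[2] - a[2] * b[1]
)

// Step 6: C = Rg' * R, computed from the averaged calibration quaternions
fun solveCalibration(r: FloatArray, rg: FloatArray) = multiply(conjugate(rg), r)

// Step 1 applied to new samples: R ≈ Rg_new * C
fun applyCalibration(rgNew: FloatArray, c: FloatArray) = multiply(rgNew, c)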
This gets me pretty close, but it doesn't quite work. I'm testing with a very simple AR-like app that shows an item (whose position is determined by the device orientation) floating on the screen. If I leave out the calibration, the item shows up and tracks perfectly, but it doesn't show up North of me (I have it fixed at (0, 1, 0) for now). If I take the rotation vector, get the quaternion, and multiply by the calibration constant, the tracking gets thrown off:
Rotating the device about the Y axis shifts the item correctly horizontally, but it also adds a vertical component: rotating in the positive direction (using the right-hand rule) moves my item up (negative Y on the screen).
Rotating the device about the X axis shifts the item correctly vertically, but it also adds a horizontal component: rotating in the positive direction (using the right-hand rule) moves my item right (positive X on the screen).
Rotating the device about the Z axis works.
Sorry for the long description, I just want to make sure all the details are there. Summary of the question: I want to get a rotation matrix that is roughly referenced to North while avoiding the magnetometer. I'm trying to do this by taking the average difference between TYPE_ROTATION_VECTOR and TYPE_GAME_ROTATION_VECTOR and using that to "calibrate" future values from TYPE_GAME_ROTATION_VECTOR, but it doesn't work. Does anyone know what the issue might be with how I'm calculating the calibration (or with any other part of this)?
Some additional info:
private float[] values = null;
public void onSensorChanged(SensorEvent event) {
values = lowPass(event.values.clone(), values);
Quaternion rawQuaternion = Quaternion.fromRotationVector(values);
Quaternion calibratedQuaternion = rawQuaternion.mult(calibration);
float[] rotationMatrix = calibratedQuaternion.getRotationMatrix();
float[] pos = new float[] { 0f, 1f, 0f, 1f };
Matrix.multiplyMV(pos, 0, rotationMatrix, 0, pos, 0);
Matrix.multiplyMV(pos, 0, matrixMVP, 0, pos, 0);
// Screen position should be found at pos[0], -pos[1] on a [-1,1] scale
}
Quaternion fromRotationVector(float[] r) {
float[] Q = new float[4];
SensorManager.getQuaternionFromVector(Q, r);
return new Quaternion(Q);
}
Quaternion mult(Quaternion q) {
Quaternion qu = new Quaternion();
qu.w = w*q.w - x*q.x - y*q.y - z*q.z;
qu.x = w*q.x + x*q.w + y*q.z - z*q.y;
qu.y = w*q.y + y*q.w + z*q.x - x*q.z;
qu.z = w*q.z + z*q.w + x*q.y - y*q.x;
return qu;
}
float[] getRotationMatrix() {
float[] M = new float[16];
float[] V = new float[] { x, y, z, w };
SensorManager.getRotationMatrixFromVector(M, V);
return M;
}

I had the same issue, did some research, and realized where the problem is. Basically, by only looking at a stationary orientation of the IMU, you only align one axis of the coordinate system, namely the vertical axis in the direction of gravity. That's why your rotations around the Z axis work fine.
To complete your static calibration, you have to include a planar motion and find the principal vectors of the motion, which will be, say, your X axis. The Y axis then follows from the right-hand rule.
Simply rotate the IMU around the global X axis and look at the gyroscope outputs of your IMU. The principal component of the gyroscope output should be along the X axis. After finding the Z axis in the first step and the X axis in the second step, you can find the Y axis by the cross product of the two. Using these axes, create the rotation matrix or the quaternion for the transformation.
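A minimal Kotlin sketch of that last step (the names are only illustrative): zAxis is the unit gravity direction measured while stationary, xAxis is the unit principal gyroscope axis found during the planar motion, and the third axis comes from their cross product.
fun cross(a: FloatArray, b: FloatArray) = floatArrayOf(
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0]
)

fun buildCalibrationMatrix(xAxis: FloatArray, zAxis: FloatArray): FloatArray {
    val yAxis = cross(zAxis, xAxis) // right-hand rule: Y = Z x X
    // Rows hold the calibrated axes expressed in sensor coordinates, so multiplying a
    // sensor-frame vector by this matrix expresses it in the calibrated frame.
    return floatArrayOf(
        xAxis[0], xAxis[1], xAxis[2],
        yAxis[0], yAxis[1], yAxis[2],
        zAxis[0], zAxis[1], zAxis[2]
    )
}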

Here's what I ended up doing (there are some changes coming soon and once done I'll publish it on jcenter as a library). What this tries to solve is being able to run the Game Rotation Vector sensor (which has much less drift than the Rotation Vector sensor) while still pointing roughly north. Answer is in Kotlin:
class RotationMatrixLiveData(context: Context) : LiveData<FloatArray>(), SensorEventListener {
private val sensorManager = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
private val rotationSensor = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR)
private val gameRotationSensor =
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR2)
sensorManager.getDefaultSensor(Sensor.TYPE_GAME_ROTATION_VECTOR)
else null
private var isActive = false
private var isCalibrating = false
private var rotationValues: FloatArray? = null
var calibrationCount = 0
var calibrationQuaternion: FloatArray? = null
var calibrationGameCount = 0
var calibrationGameQuat: FloatArray? = null
var calibration: Quaternion? = null
var rotationQuaternionValues = FloatArray(4)
var gameQuaternionValues = FloatArray(4)
private val rotationVectorQuaternion = Quaternion()
init {
value = floatArrayOf(
1f, 0f, 0f, 0f,
0f, 1f, 0f, 0f,
0f, 0f, 1f, 0f,
0f, 0f, 0f, 1f)
}
/**
* Starts calibrating the rotation matrix (if the game rotation vector sensor
* is available).
*/
fun beginCalibration() {
gameRotationSensor?.let {
isCalibrating = true
calibration = null
calibrationQuaternion = null
calibrationCount = 0
calibrationGameQuat = null
calibrationGameCount = 0
sensorManager.registerListener(this, rotationSensor, SensorManager.SENSOR_DELAY_FASTEST)
sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_FASTEST)
}
}
/**
* Stop calibrating the rotation matrix.
*/
fun stopCalibration() {
isCalibrating = false
if (!isActive) {
// Not active, just turn off everything
sensorManager.unregisterListener(this)
} else if (gameRotationSensor != null) {
// Active and has both sensors, turn off rotation and leave the game rotation running
sensorManager.unregisterListener(this, rotationSensor)
}
}
override fun onActive() {
super.onActive()
isActive = true
val sensor = gameRotationSensor ?: rotationSensor
sensorManager.registerListener(this, sensor, SensorManager.SENSOR_DELAY_FASTEST)
}
override fun onInactive() {
super.onInactive()
isActive = false
if (!isCalibrating) {
sensorManager.unregisterListener(this)
}
}
//
// SensorEventListener
//
override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) {}
override fun onSensorChanged(event: SensorEvent) {
if (isCalibrating) {
if (event.sensor.type == Sensor.TYPE_ROTATION_VECTOR) {
SensorManager.getQuaternionFromVector(rotationQuaternionValues, event.values)
calibrationQuaternion?.let { quat ->
for (i in 0..3) {
rotationQuaternionValues[i] += quat[i]
}
}
calibrationQuaternion = rotationQuaternionValues
calibrationCount++
} else if (event.sensor.type == Sensor.TYPE_GAME_ROTATION_VECTOR) {
SensorManager.getQuaternionFromVector(gameQuaternionValues, event.values)
calibrationGameQuat?.let {quat ->
for (i in 0..3) {
gameQuaternionValues[i] += quat[i]
}
}
calibrationGameQuat = gameQuaternionValues
calibrationGameCount++
}
} else if (gameRotationSensor == null || event.sensor.type != Sensor.TYPE_ROTATION_VECTOR) {
// Only calculate rotation if there is no game rotation sensor or if the event is a game
// rotation
val calibrationQ = calibrationQuaternion
val calibrationQg = calibrationGameQuat
if (calibrationQ != null && calibrationQg != null) {
for (i in 0..3) {
calibrationQ[i] /= calibrationCount.toFloat()
calibrationQg[i] /= calibrationGameCount.toFloat()
}
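// The calibration is the difference between the averaged quaternions; zeroing x and y
// and renormalizing keeps only the heading (yaw) component, i.e. a rotation about the
// vertical axis, so tilt is left untouched.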
calibration = (Quaternion(calibrationQg).apply { conjugate() } *
Quaternion(calibrationQ)).apply {
x = 0f
y = 0f
normalize()
}
}
calibrationQuaternion = null
calibrationGameQuat = null
// Run values through low-pass filter
val values = lowPass(event.values, rotationValues)
rotationValues = values
rotationVectorQuaternion.setFromRotationVector(values)
// Calibrate if available
calibration?.let { rotationVectorQuaternion.preMult(it) }
// Generate rotation matrix
value = rotationVectorQuaternion.getRotationMatrix(value)
}
}
}
For the quaternion class I'm using:
class Quaternion(val values: FloatArray = floatArrayOf(1f, 0f, 0f, 0f)) {
companion object {
fun fromRotationVector(rv: FloatArray): Quaternion {
val Q = FloatArray(4)
SensorManager.getQuaternionFromVector(Q, rv)
return Quaternion(Q)
}
}
private val buffer = FloatArray(4)
var w: Float
get() = values[0]
set(value) { values[0] = value }
var x: Float
get() = values[1]
set(value) { values[1] = value }
var y: Float
get() = values[2]
set(value) { values[2] = value }
var z: Float
get() = values[3]
set(value) { values[3] = value }
fun setFromRotationVector(rv: FloatArray) {
SensorManager.getQuaternionFromVector(values, rv)
}
fun conjugate() {
x = -x
y = -y
z = -z
}
fun getRotationMatrix(R: FloatArray? = null): FloatArray {
val matrix = R ?: FloatArray(16)
for (i in 0..3) {
buffer[i] = values[(i+1)%4]
}
SensorManager.getRotationMatrixFromVector(matrix, buffer)
return matrix
}
fun magnitude(): Float {
var mag = 0f
for (i in 0..3) {
mag += values[i]*values[i]
}
return Math.sqrt(mag.toDouble()).toFloat()
}
fun normalize() {
val mag = magnitude()
x /= mag
y /= mag
z /= mag
w /= mag
}
fun preMult(left: Quaternion) {
buffer[0] = left.w*this.w - left.x*this.x - left.y*this.y - left.z*this.z
buffer[1] = left.w*this.x + left.x*this.w + left.y*this.z - left.z*this.y
buffer[2] = left.w*this.y + left.y*this.w + left.z*this.x - left.x*this.z
buffer[3] = left.w*this.z + left.z*this.w + left.x*this.y - left.y*this.x
for (i in 0..3) {
values[i] = buffer[i]
}
}
operator fun times(q: Quaternion): Quaternion {
val qu = Quaternion()
qu.w = w*q.w - x*q.x - y*q.y - z*q.z
qu.x = w*q.x + x*q.w + y*q.z - z*q.y
qu.y = w*q.y + y*q.w + z*q.x - x*q.z
qu.z = w*q.z + z*q.w + x*q.y - y*q.x
return qu
}
operator fun times(v: FloatArray): FloatArray {
val conj = Quaternion(values.clone()).apply { conjugate() }
return multiplyQV(multiplyQV(values, v), conj.values)
}
override fun toString(): String {
return "(${w.toString(5)}(w), ${x.toString(5)}, ${y.toString(5)}, ${z.toString(5)}) |${magnitude().toString(5)}|"
}
private fun multiplyQV(q: FloatArray, r: FloatArray): FloatArray {
val result = FloatArray(4)
result[0] = r[0]*q[0]-r[1]*q[1]-r[2]*q[2]-r[3]*q[3]
result[1] = r[0]*q[1]+r[1]*q[0]-r[2]*q[3]+r[3]*q[2]
result[2] = r[0]*q[2]+r[1]*q[3]+r[2]*q[0]-r[3]*q[1]
result[3] = r[0]*q[3]-r[1]*q[2]+r[2]*q[1]+r[3]*q[0]
return result
}
}
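A hypothetical usage sketch (only RotationMatrixLiveData comes from the code above; the Activity, Observer and Handler wiring are assumed): observe the matrix from an Activity and run a short calibration window on startup.
class ArActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val rotationMatrixLiveData = RotationMatrixLiveData(this)
        rotationMatrixLiveData.observe(this, Observer { matrix: FloatArray ->
            // matrix is the 16-element rotation matrix, ready for android.opengl.Matrix calls
        })
        // Average both sensors for a few seconds, then keep only the game rotation vector
        rotationMatrixLiveData.beginCalibration()
        Handler(Looper.getMainLooper()).postDelayed({ rotationMatrixLiveData.stopCalibration() }, 5_000L)
    }
}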

Related

Why does the SensorEvent not respond well with the Bitmap and return null?

I am making an app that uses the CameraX API and Bitmap. My goal is to make the camera not only capture a simple image, but also draw watermark text on top of the image before writing the file. However, the Azimuth, Pitch and Roll show up as null in the result when the bitmap is finished (the attached image showed the actual output of the JPEG file). If I didn't explain it correctly or made a mistake while explaining, please let me know as soon as possible; if you have any questions, I'll answer them. Thank you in advance.
To give additional meaning to the image, here is a translation of the Bulgarian labels into English and what they stand for:
Посока -> Direction (West, South, North, East): Azimuth
Наклон -> Tilt: Pitch, Roll
private val mSensorEvent: SensorEvent? = null
override fun onImageSaved(output: ImageCapture.OutputFileResults) {
//Azimuth, Pitch and Roll
val azimuthWMText = resources.getString(R.string.value_format_2, mSensorEvent?.values?.get(0))
val pitchWMText = resources.getString(R.string.value_format_2, mSensorEvent?.values?.get(1))
val rollWMText = resources.getString(R.string.value_format_2, mSensorEvent?.values?.get(2))
//Bitmap that contains the addWatermark method and detecting the new photo path that is been taken and implements the watermark
//BitmapFactory.decodeFile(path.toString(), BitmapFactory.Options()) -> through File
//resources, R.layout.activity_preview_photo, BitmapFactory.Options() -> through resources
val originalBitmap = AddWatermark().addWatermark(
BitmapFactory.decodeFile(photoFile.toString(), BitmapFactory.Options()),
firstWatermarkText = "Дължина: $longitudeWM${resources.getString(R.string.degrees)}, Ширина: $latitudeWM${resources.getString(R.string.degrees)}", // Longitude, Latitude
secondWatermarkText = "Височина: ${altitudeWM}м, Ориентация: $orientationWM", // Altitude (m), Orientation
thirdWatermarkText = "Точност: Хоризонтална: ${hozAccuracyWM}м, Вертикална: ${verAccuracyWM}м", // Accuracy: Horizontal (m), Vertical (m)
fourthWatermarkText = "Посока: where $azimuthWMText${resources.getString(R.string.degrees)}", // Direction
fifthWatermarkText = "Наклон: pitchTilt $pitchWMText${resources.getString(R.string.degrees)}, rollTilt $rollWMText${resources.getString(R.string.degrees)}", // Tilt: pitch, roll
sixthWatermarkText = "Дата и Час: $dateTimeFormatWMText", // Date and Time
AddWatermark.WatermarkOptions(
AddWatermark.Corner.TOP_LEFT,
textSizeToWidthRation = 0.017f,
paddingToWidthRatio = 0.03f,
Color.parseColor("#FF0000"),
shadowColor = Color.BLACK,
strokeOutline = null,
typeface = null
)
)
previewView.bitmap.let { originalBitmap }
val outputStream = FileOutputStream(photoFile)
originalBitmap.compress(Bitmap.CompressFormat.JPEG, 90, outputStream)
outputStream.flush()
outputStream.close()
Toast.makeText(
this@CameraActivity,
"Обработването и запазено успешно! Запазено е в: $photoFile", // "Processed and saved successfully! Saved to: $photoFile"
Toast.LENGTH_LONG
).show()
}
The watermark class code:
class AddWatermark : AppCompatActivity() {
//The watermark-adding method is declared here at the top
fun addWatermark(
bitmap: Bitmap,
firstWatermarkText: String,
secondWatermarkText: String,
thirdWatermarkText: String,
fourthWatermarkText: String,
fifthWatermarkText: String,
sixthWatermarkText: String,
options: WatermarkOptions = WatermarkOptions()): Bitmap {
val result = bitmap.copy(bitmap.config, true)
val canvas = Canvas(result)
val paint = Paint(Paint.ANTI_ALIAS_FLAG or Paint.DITHER_FLAG)
//val strokePaint = Paint()
//We are including the Enum class and connecting with the data class WatermarkOptions variable
paint.textAlign = when (options.corner) {
//We include the alignment LEFT from Enum class and connecting with Paint variable
Corner.TOP_LEFT,
Corner.BOTTOM_LEFT -> Paint.Align.LEFT
//We include the alignment RIGHT from Enum class and connecting with Paint variable
Corner.TOP_RIGHT,
Corner.BOTTOM_RIGHT -> Paint.Align.RIGHT
}
/*strokePaint.textAlign = when (options.corner) {
Corner.TOP_LEFT,
Corner.BOTTOM_LEFT -> Paint.Align.LEFT
Corner.TOP_RIGHT,
Corner.BOTTOM_RIGHT -> Paint.Align.RIGHT
}
*/
//We connect the new textSize variable with the bitmap width(default is 0) and we multiply with the WatermarkOption's textSize
val textSize = result.width * options.textSizeToWidthRation
//Connecting the Paint textSize variable with the new textSize variable
paint.textSize = textSize//70.5f
//Connecting the Paint color variable with the WatermarkOptions textColor
paint.color = options.textColor
//If the shadowColor of the WMOptions is not null, then we make it as a Paint shadowLayer variable
if (options.shadowColor != null) {
paint.setShadowLayer( 2.5f, 0f, 0f, options.shadowColor)
}
/*if (options.strokeOutline != null) {
strokePaint.textSize = textSize//72f
strokePaint.color = options.strokeOutline
strokePaint.style = Paint.Style.STROKE
strokePaint.strokeWidth = 4.17f
}
*/
//If typeface of the WMOptions is not null,we make paint typeface variable and connecting with the WMOptions variable
if (options.typeface != null) {
paint.typeface = options.typeface
}
//We connect the new padding variable with the bitmap width(default is 0) and multiply with WMOptions padding
val padding = result.width * options.paddingToWidthRatio
//Create a variable that has something to do with the coordinates method
val coordinates = calculateCoordinates(
firstWatermarkText,
secondWatermarkText,
thirdWatermarkText,
fourthWatermarkText,
fifthWatermarkText,
sixthWatermarkText,
paint,
options,
canvas.width,
canvas.height,
padding)
/**drawText text as a Watermark, using Canvas**/
//canvas.drawText(firstWatermarkText, coordinates.x, coordinates.y, strokePaint)
//canvas.drawText(secondWatermarkText, coordinates.x, 240f, strokePaint) //We change the Y horizontal coordinates by typing the float number
//canvas.drawText(thirdWatermarkText, coordinates.x, 310f, strokePaint)
//canvas.drawText(fourthWatermarkText, coordinates.x, 380f, strokePaint)
//canvas.drawText(fifthWatermarkText, coordinates.x, 450f, strokePaint)
when (Build.VERSION.SDK_INT) {
//Android 11
30 -> {
canvas.drawText(firstWatermarkText, coordinates.x, coordinates.y, paint)
canvas.drawText(secondWatermarkText, coordinates.x, 240f, paint)
canvas.drawText(thirdWatermarkText, coordinates.x, 310f, paint)
canvas.drawText(fourthWatermarkText, coordinates.x, 380f, paint)
canvas.drawText(fifthWatermarkText, coordinates.x, 450f, paint)
canvas.drawText(sixthWatermarkText, coordinates.x, 520f, paint)
}
//Android 9
28 -> {
canvas.drawText(firstWatermarkText, coordinates.x, coordinates.y, paint)
canvas.drawText(secondWatermarkText, coordinates.x, 240f, paint)
canvas.drawText(thirdWatermarkText, coordinates.x, 310f, paint)
canvas.drawText(fourthWatermarkText, coordinates.x, 380f, paint)
canvas.drawText(fifthWatermarkText, coordinates.x, 450f, paint)
canvas.drawText(sixthWatermarkText, coordinates.x, 520f, paint)
}
//Android 5.0
21 -> {
canvas.drawText(firstWatermarkText, coordinates.x, coordinates.y, paint)
canvas.drawText(secondWatermarkText, coordinates.x, 270f, paint)
canvas.drawText(thirdWatermarkText, coordinates.x, 330f, paint)
canvas.drawText(fourthWatermarkText, coordinates.x, 420f, paint)
canvas.drawText(fifthWatermarkText, coordinates.x, 480f, paint)
canvas.drawText(sixthWatermarkText, coordinates.x, 540f, paint)
}
}
return result
}
//This is the corner alignment calculation method, used by drawText
private fun calculateCoordinates(
firstWatermarkText: String,
secondWatermarkText: String,
thirdWatermarkText: String,
fourthWatermarkText: String,
fifthWatermarkText: String,
sixthWatermarkText: String,
paint: Paint,
options: WatermarkOptions,
width: Int,
height: Int,
padding: Float
): PointF {
val x = when (options.corner) {
Corner.TOP_LEFT,
Corner.BOTTOM_LEFT -> {
padding
}
Corner.TOP_RIGHT,
Corner.BOTTOM_RIGHT -> {
width - padding
}
}
val y = when (options.corner) {
Corner.BOTTOM_LEFT,
Corner.BOTTOM_RIGHT -> {
height - padding
}
Corner.TOP_LEFT,
Corner.TOP_RIGHT -> {
val bounds = Rect()
paint.getTextBounds(firstWatermarkText, 0, firstWatermarkText.length, bounds)
paint.getTextBounds(secondWatermarkText, 0, secondWatermarkText.length, bounds)
paint.getTextBounds(thirdWatermarkText, 0, thirdWatermarkText.length, bounds)
paint.getTextBounds(fourthWatermarkText, 0, fourthWatermarkText.length, bounds)
paint.getTextBounds(fifthWatermarkText, 0, fifthWatermarkText.length, bounds)
paint.getTextBounds(sixthWatermarkText, 0, sixthWatermarkText.length, bounds)
val textHeight = bounds.height()
textHeight + padding
}
}
return PointF(x, y)
}
enum class Corner {
TOP_LEFT,
TOP_RIGHT,
BOTTOM_LEFT,
BOTTOM_RIGHT
}
data class WatermarkOptions(
val corner: Corner = Corner.BOTTOM_RIGHT,
val textSizeToWidthRation: Float = 0.04f,
val paddingToWidthRatio: Float = 0.03f,
@ColorInt val textColor: Int = Color.parseColor("#FFC800"),
@ColorInt val shadowColor: Int? = Color.BLACK,
@ColorInt val strokeOutline: Int? = Color.BLACK,
val typeface: Typeface? = null
)
}
OK, I found my issue: instead of using a new mSensorEvent, I used the sensor event inside onSensorChanged(event: SensorEvent?) and recast all the lateinit variables. Here is the answer to my question:
private var degreeAzimuth: Double = 0.0
private var currentAzimuth: Double = 0.0
private var currentPitch: Double = 0.0
private var currentRoll: Double = 0.0
//'dirs' is the directions array list of type String and 'currentDirection' will be used for recasting
private var dirs = ArrayList<String>()
private lateinit var currentDirection: String
override fun onResume() {
super.onResume()
window.decorView.systemUiVisibility = View.SYSTEM_UI_FLAG_FULLSCREEN
actionBar?.hide()
//Registers the listener for the default orientation sensor (used for the Azimuth)
mSensorManager.registerListener(this,
mSensorManager.getDefaultSensor(Sensor.TYPE_ORIENTATION),
SensorManager.SENSOR_DELAY_GAME)
}
override fun onPause() {
super.onPause()
//Unregister the listener from its sensor
mSensorManager.unregisterListener(this)
}
/**The sensor method initializes the type sensors and calculates the gravity and geomagnetic field**/
@SuppressLint("CutPasteId")
override fun onSensorChanged(event: SensorEvent?) {
//Portrait measurements - Correct by default
try {
val windowOrientation = windowManager.defaultDisplay.orientation
//Azimuth
degreeAzimuth = event!!.values[0].toDouble()
val azimuthTxt = findViewById<TextView>(R.id.azimuthText)
//This makes sure the Pitch and Roll values don't switch places when the orientation is turned between Portrait and Landscape by default
var degreeRoll: Double
var degreePitch: Double
if (windowOrientation == 1 || windowOrientation == 3) {
degreeRoll = event.values[1].toDouble()
degreePitch = event.values[2].toDouble()
} else {
degreeRoll = event.values[2].toDouble()
degreePitch = event.values[1].toDouble()
}
//Adjust the Pitch in degrees for Portrait, Landscape, Reverse Portrait and Reverse Landscape so the value behaves consistently in every case
when (windowOrientation) {
Surface.ROTATION_0 -> {
degreePitch += 90
}
1 -> {
if (abs(degreeRoll) > 90) {
degreePitch -= 90
degreeRoll = if (degreeRoll < 0)
-(180 + degreeRoll)
else
180 - degreeRoll
} else {
degreePitch = 90 - degreePitch
}
}
2 -> {
degreePitch = if (degreePitch < 90)
90 - degreePitch
else
-(degreePitch - 90)
degreeRoll = -degreeRoll
}
3 -> {
if (abs(degreeRoll) > 90) {
degreePitch = -(90 + degreePitch)
degreeRoll = if (degreeRoll < 0)
-(180 + degreeRoll)
else
180 - degreeRoll
} else {
degreePitch += 90
degreeRoll = -degreeRoll
}
}
}
/*Calculations for when the Azimuth has to account for a new display orientation: the new
"azimuth" variable adds the window orientation times 90 degrees. After that, the new
"azimuth" value is mapped into the dirs array list.*/
//Direction display from the array list from 1 to 9 elements
val where = findViewById<TextView>(R.id.direction_text)
dirs = arrayListOf(
"North",
"North East",
"East",
"South East",
"South",
"South West",
"West",
"North West",
"North"
)
// We change the "azimuth" variable here, along with its calculations.
var azimuth = degreeAzimuth + (windowOrientation * 90)
when (windowOrientation) {
//Portrait
Surface.ROTATION_0 -> {
if (azimuth > 360) {
azimuth -= 360
}
azimuthTxt.text = resources.getString(R.string.value_format, azimuth)
where.text = dirs[((azimuth + 22.5) / 45.0).toInt()]
}
//Landscape
Surface.ROTATION_90 -> {
if (azimuth > 360) {
azimuth -= 360
}
azimuthTxt.text = resources.getString(R.string.value_format, azimuth)
where.text = dirs[((azimuth + 22.5) / 45.0).toInt()]
}
//Reverse Portrait
Surface.ROTATION_180 -> {
if (azimuth > 360) {
azimuth -= 360
}
azimuthTxt.text = resources.getString(R.string.value_format, azimuth)
where.text = dirs[((azimuth + 22.5) / 45.0).toInt()]
}
//Reverse Landscape
Surface.ROTATION_270 -> {
if (azimuth > 360) {
azimuth -= 360
}
azimuthTxt.text = resources.getString(R.string.value_format, azimuth)
where.text = dirs[((azimuth + 22.5) / 45.0).toInt()]
}
}
currentDirection = dirs[((azimuth + 22.5) / 45.0).toInt()]
currentAzimuth = azimuth
val azimuthText = findViewById<TextView>(R.id.azimuthText)
azimuthText.text = resources.getString(R.string.value_format_2, currentAzimuth)
currentPitch = degreePitch
val pitchText = findViewById<TextView>(R.id.pitchText)
pitchText.text = resources.getString(R.string.value_format, currentPitch)
currentRoll = degreeRoll
val rollText = findViewById<TextView>(R.id.rollText)
rollText.text = resources.getString(R.string.value_format, currentRoll)
} catch (e: Exception) {
e.printStackTrace()
}
}
currentAzimuth, currentPitch, currentRoll - these are recast and used as the end result
degreeAzimuth - this is used only for calculations, in order to account for every orientation of the display surface
Also, since Sensor.TYPE_ORIENTATION is deprecated, the suggested way is to use the accelerometer and magnetic field sensors instead:
override fun onResume() {
super.onResume()
window.decorView.systemUiVisibility = View.SYSTEM_UI_FLAG_FULLSCREEN
actionBar?.hide()
//Registers the listener for the default magnetic field and accelerometer sensors (used to compute the Azimuth)
mSensorManager.registerListener(this,
mSensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
SensorManager.SENSOR_DELAY_GAME)
mSensorManager.registerListener(this,
mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
SensorManager.SENSOR_DELAY_GAME)
}
/**The sensor method initializes the type sensors and calculates the gravity and geomagnetic field**/
@SuppressLint("CutPasteId")
override fun onSensorChanged(event: SensorEvent?) {
//Portrait measurements - Correct by default
try {
val windowOrientation = windowManager.defaultDisplay.orientation
//Alpha constant for the low-pass filter
val alpha = 0.97f
//The Sensor Event selects a type between Accelerometer and Magnetic Field
when (event!!.sensor.type) {
Sensor.TYPE_ACCELEROMETER -> {
mGravity[0] = alpha * mGravity[0] + (1 - alpha) * event.values[0]
mGravity[1] = alpha * mGravity[1] + (1 - alpha) * event.values[1]
mGravity[2] = alpha * mGravity[2] + (1 - alpha) * event.values[2]
}
Sensor.TYPE_MAGNETIC_FIELD -> {
mGeomagnetic[0] = alpha * mGeomagnetic[0] + (1 - alpha) * event.values[0]
mGeomagnetic[1] = alpha * mGeomagnetic[1] + (1 - alpha) * event.values[1]
mGeomagnetic[2] = alpha * mGeomagnetic[2] + (1 - alpha) * event.values[2]
}
}
val rotationMatrix = FloatArray(9)
val inclinationMatrix = FloatArray(9)
val success: Boolean = SensorManager.getRotationMatrix(rotationMatrix, inclinationMatrix, mGravity, mGeomagnetic)
if (success) {
//Azimuth
degreeAzimuth = event!!.values[0].toDouble()
val azimuthTxt = findViewById<TextView>(R.id.azimuthText)
//This makes sure the Pitch and Roll values don't switch places when the orientation is turned between Portrait and Landscape by default
var degreeRoll: Double
var degreePitch: Double
if (windowOrientation == 1 || windowOrientation == 3) {
degreeRoll = event.values[1].toDouble()
degreePitch = event.values[2].toDouble()
} else {
degreeRoll = event.values[2].toDouble()
degreePitch = event.values[1].toDouble()
}
// We change the "azimuth" variable here, along with its calculations.
var azimuth = degreeAzimuth + (windowOrientation * 90)
when (windowOrientation) {
//Portrait
Surface.ROTATION_0 -> {
if (azimuth > 360) {
azimuth -= 360
}
azimuthTxt.text = resources.getString(R.string.value_format, azimuth)
where.text = dirs[((azimuth + 22.5) / 45.0).toInt()]
}
//Landscape
Surface.ROTATION_90 -> {
if (azimuth > 360) {
azimuth -= 360
}
azimuthTxt.text = resources.getString(R.string.value_format, azimuth)
where.text = dirs[((azimuth + 22.5) / 45.0).toInt()]
}
//Reverse Portrait
Surface.ROTATION_180 -> {
if (azimuth > 360) {
azimuth -= 360
}
azimuthTxt.text = resources.getString(R.string.value_format, azimuth)
where.text = dirs[((azimuth + 22.5) / 45.0).toInt()]
}
//Reverse Landscape
Surface.ROTATION_270 -> {
if (azimuth > 360) {
azimuth -= 360
}
azimuthTxt.text = resources.getString(R.string.value_format, azimuth)
where.text = dirs[((azimuth + 22.5) / 45.0).toInt()]
}
}
currentAzimuth = azimuth
val azimuthText = findViewById<TextView>(R.id.azimuthText)
azimuthText.text = resources.getString(R.string.value_format_2, currentAzimuth)
currentPitch = degreePitch
val pitchText = findViewById<TextView>(R.id.pitchText)
pitchText.text = resources.getString(R.string.value_format, currentPitch)
currentRoll = degreeRoll
val rollText = findViewById<TextView>(R.id.rollText)
rollText.text = resources.getString(R.string.value_format, currentRoll)
}
}
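One note on the snippet above: once getRotationMatrix() succeeds, azimuth/pitch/roll are normally derived from the rotation matrix with SensorManager.getOrientation() rather than read directly from event.values. A minimal sketch of that step:
if (success) {
    // getOrientation fills azimuth, pitch and roll (in radians) from the rotation matrix
    val orientationAngles = FloatArray(3)
    SensorManager.getOrientation(rotationMatrix, orientationAngles)
    val azimuthDegrees = Math.toDegrees(orientationAngles[0].toDouble()) // 0 = magnetic North
    val pitchDegrees = Math.toDegrees(orientationAngles[1].toDouble())
    val rollDegrees = Math.toDegrees(orientationAngles[2].toDouble())
}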
Then, in the onImageSaved() method, use the AddWatermark class as follows:
val originalBitmap = AddWatermark().addWatermark(
BitmapFactory.decodeFile(photoFile.absolutePath),
firstWatermarkText = "Azimuth: $currentDirection, $currentAzimuth",
secondWatermarkText = "Pitch: $currentPitch, Roll: $currentRoll",
AddWatermark.WatermarkOptions(
AddWatermark.Corner.TOP_LEFT,
textSizeToWidthRatio = 0.017f,
paddingToWidthRatio = 0.03f,
textColor = Color.parseColor("#FF0000"),
shadowColor = null,
typeface = null
)
)
viewFinder.bitmap.let { originalBitmap }
val outputStream = FileOutputStream(photoFile.toString())
originalBitmap.compress(Bitmap.CompressFormat.JPEG, 90, outputStream)
outputStream.flush()
outputStream.close()

" 'onSensorChanged' overrides nothing " Fault

I'm trying to get values from SensorManager. I copied the code from the Android documentation, but problems occurred. Please look at the code.
I was working on the gyroscope sensor and wanted to examine the gyroscope values and results. I found the code on this page:
" https://developer.android.com/guide/topics/sensors/sensors_motion#sensors-motion-gyro "
I get an error message at override fun onSensorChanged(event: SensorEvent?)
It says " 'onSensorChanged' overrides nothing "
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
}
// Create a constant to convert nanoseconds to seconds.
private val NS2S = 1.0f / 1000000000.0f
private val deltaRotationVector = FloatArray(4) { 0f }
private var timestamp: Float = 0f
override fun onSensorChanged(event: SensorEvent?) {
// This timestep's delta rotation to be multiplied by the current rotation
// after computing it from the gyro sample data.
if (timestamp != 0f && event != null) {
val dT = (event.timestamp - timestamp) * NS2S
// Axis of the rotation sample, not normalized yet.
var axisX: Float = event.values[0]
var axisY: Float = event.values[1]
var axisZ: Float = event.values[2]
// Calculate the angular speed of the sample
val omegaMagnitude: Float = sqrt(axisX * axisX + axisY * axisY + axisZ * axisZ)
// Normalize the rotation vector if it's big enough to get the axis
// (that is, EPSILON should represent your maximum allowable margin of error)
if (omegaMagnitude > EPSILON) {
axisX /= omegaMagnitude
axisY /= omegaMagnitude
axisZ /= omegaMagnitude
}
// Integrate around this axis with the angular speed by the timestep
// in order to get a delta rotation from this sample over the timestep
// We will convert this axis-angle representation of the delta rotation
// into a quaternion before turning it into the rotation matrix.
val thetaOverTwo: Float = omegaMagnitude * dT / 2.0f
val sinThetaOverTwo: Float = sin(thetaOverTwo).toFloat()
val cosThetaOverTwo: Float = cos(thetaOverTwo).toFloat()
deltaRotationVector[0] = sinThetaOverTwo * axisX
deltaRotationVector[1] = sinThetaOverTwo * axisY
deltaRotationVector[2] = sinThetaOverTwo * axisZ
deltaRotationVector[3] = cosThetaOverTwo
Log.d("DENEME", "onSensorChanged: " + axisX)
Log.d("DENEME", "onSensorChanged: " + axisY)
Log.d("DENEME", "onSensorChanged: " + axisZ)
}
timestamp = event?.timestamp?.toFloat() ?: 0f
val deltaRotationMatrix = FloatArray(9) { 0f }
SensorManager.getRotationMatrixFromVector(deltaRotationMatrix, deltaRotationVector);
// User code should concatenate the delta rotation we computed with the current rotation
// in order to get the updated rotation.
// rotationCurrent = rotationCurrent * deltaRotationMatrix;
}
fun onClickDevam(view: View) // click event button to check values
{
val sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager
val sensor: Sensor? = sensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE)
}
}
The override keyword in Kotlin suggests that the class is inheriting a function from a super class or interface. The Android documentation seems to be missing a pretty important step which is having your activity class implement the SensorEventListener interface.
To do this change your MainActivity declaration to something like this:
class MainActivity : AppCompatActivity(), SensorEventListener {
SensorEventListener contains the onSensorChanged function you're talking about. It will also require you to override an additional function, onAccuracyChanged, so you'll need to do this as well (but if you don't really care about accuracy changes you can leave the function's body empty - you just need to override it to satisfy the interface).
Android Studio has a handy shortcut for automatically overriding functions from interfaces which you may find useful: Ctrl+O
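Putting it together, a minimal sketch of the fixed class might look like this (registration is shown in onCreate only to keep it short; in a real app you would typically register in onResume and unregister in onPause):
class MainActivity : AppCompatActivity(), SensorEventListener {
    private lateinit var sensorManager: SensorManager

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager
        sensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE)?.let { gyro ->
            sensorManager.registerListener(this, gyro, SensorManager.SENSOR_DELAY_NORMAL)
        }
    }

    override fun onSensorChanged(event: SensorEvent?) {
        // gyroscope handling from the question goes here
    }

    // Required by SensorEventListener; may stay empty if accuracy changes are not needed
    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) {}
}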

How to calculate 2D rotation from 3D matrix obtained from sensor TYPE_ROTATION_VECTOR (to lock one axis rotation)

I want to build an application which uses the phone's sensor fusion to rotate a 3D object in OpenGL. But I want the Z axis to be locked, so I basically want to apply a 2D rotation to my model.
In order to build the 2D rotation from the 3D matrix I get from SensorManager.getRotationMatrixFromVector(), I build a rotation matrix for each axis as explained in the picture:
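(The picture is not reproduced here; for reference, these are the standard right-handed rotation matrices about the X, Y and Z axes, which match the indices used in the code below.)
R_x(\alpha) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix}
R_y(\beta) = \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix}
R_z(\gamma) = \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}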
I want to apply a 2D rotation matrix which would be R = Ry*Rx, however this does not seem to work. Applying R = Rz*Ry works as expected, though. My guess is that the Rx values are not correct.
To build the Rz, Ry and Rx matrices I looked up the values used by SensorManager.getOrientation() to calculate the angles:
values[0] = (float) Math.atan2(R[1], R[4]);
values[1] = (float) Math.asin(-R[7]);
values[2] = (float) Math.atan2(-R[6], R[8]);
So here is how I build matrices for each axis:
private val degConst = 180/Math.PI
private var mTempRotationMatrix = MatrixCalculations.createUnit(3)
override fun onSensorChanged(event: SensorEvent?) {
val sensor = event?.sensor ?: return
when (sensor.type) {
Sensor.TYPE_ROTATION_VECTOR -> {
SensorManager.getRotationMatrixFromVector(mTempRotationMatrix, event.values)
val zSinAlpha = mTempRotationMatrix[1]
val zCosAlpha = mTempRotationMatrix[4]
val ySinAlpha = -mTempRotationMatrix[6]
val yCosAlpha = mTempRotationMatrix[8]
val xSinAlpha = -mTempRotationMatrix[7]
val xCosAlpha = mTempRotationMatrix[4]
val rx = MatrixCalculations.createUnit(3)
val ry = MatrixCalculations.createUnit(3)
val rz = MatrixCalculations.createUnit(3)
val sina = xSinAlpha
val cosa = xCosAlpha
val sinb = ySinAlpha
val cosb = yCosAlpha
val siny = zSinAlpha
val cosy = zCosAlpha
rx[4] = cosa
rx[5] = -sina
rx[7] = sina
rx[8] = cosa
ry[0] = cosb
ry[2] = sinb
ry[6] = -sinb
ry[8] = cosb
rz[0] = cosy
rz[1] = -siny
rz[3] = siny
rz[4] = cosy
val ryx = MatrixCalculations.multiply(ry, rx)
mTempRotationMatrix = ryx
MatrixCalculations.copy(mTempRotationMatrix, mRenderer.rotationMatrix)
LOG.info("product: [" + mRenderer.rotationMatrix.joinToString(" ") + "]")
val orientation = FloatArray(3)
SensorManager.getOrientation(mTempRotationMatrix, orientation)
LOG.info("yaw: " + orientation[0] * degConst + "\n\tpitch: " + orientation[1] * degConst + "\n\troll: " + orientation[2] * degConst)
}
The question is: what am I doing wrong, and what values should I use for the Rx matrix? Is my math for this problem broken? It would also be interesting to know how the value of event.values[3] is used to build the rotation matrix in SensorManager.getRotationMatrixFromVector().
In theory the operation R = Ry*Rx should give me the correct rotation, but that is not the case.

How to place a 3D Model in AR at center of screen and move it near or far with respect to camera angle on axis?

Below is the code which I have tried. It gives me the desired result, but it is not as optimized as the CamToPlan or MagicPlan apps. In the CamToPlan app the center node moves very efficiently with the camera movement. If the camera is tilted, the anchor node distance changes. How can I achieve the same with the code below?
Camera camera = arSceneView.getScene().getCamera();
Vector3 distance = Vector3.subtract(camera.getWorldPosition(), vector3CirclePosition);
float abs = Math.abs(distance.y);
float newAngleInRadian = (float) (Math.toRadians(90f) - (float) camera.getLocalRotation().x);
float zCoordinate = (float) (abs / Math.cos(newAngleInRadian));
Log.i("1", "zCoordinate::" + zCoordinate + "::" + abs);
Vector3 cameraPos = arFragment.getArSceneView().getScene().getCamera().getWorldPosition();
Vector3 cameraForward = arFragment.getArSceneView().getScene().getCamera().getForward();
Vector3 position = Vector3.add(cameraPos, cameraForward.scaled(zCoordinate));
redNodeCenter.setWorldPosition(position);
Step 1: Create the addWaterMark() method in your class
private var oldWaterMark : Node? = null
private fun addWaterMark() {
ModelRenderable.builder()
.setSource(context, R.raw.step1)
.build()
.thenAccept {
addNode(it)
}
.exceptionally {
Toast.makeText(context, "Error", Toast.LENGTH_SHORT).show()
return@exceptionally null
}
}
private fun addNode(model: ModelRenderable?) {
if(oldWaterMark!=null){
arSceneView().scene.removeChild(oldWaterMark)
}
model?.let {
val node = Node().apply {
setParent(arSceneView().scene)
var camera = arSceneView().scene.camera
var ray = camera.screenPointToRay(200f,500f)
// var local=arSceneView.getScene().getCamera().localPosition
localPosition = ray.getPoint(1f)
localRotation = arSceneView().scene.camera.localRotation
localScale = Vector3(0.3f, 0.3f, 0.3f)
renderable = it
}
arSceneView().scene.addChild(node)
oldWaterMark = node
}
}
Step 2: Call the addWaterMark() inside addOnUpdateListener
arSceneView.scene.addOnUpdateListener { addWaterMark() }
Note: I created the oldWaterMark object to remove the old watermark.
If you want to change the position, change this line:
camera.screenPointToRay(200f,500f)//200f -> X position, 500f -> Y position

Android monospaced font size

I am doing a graphical code editor where I can modify constants by dragging them.
I want to highlight the commands in the code with blue rectangles such that left and right borders lay in the middle of characters, but the blue rectangles are still misaligned in some cases:
My idea is to first compute the char width and char space, and then multiply them afterwards by the position of my command in my text.
val mCodePaint = new TextPaint()
mCodePaint.setTypeface(Typeface.MONOSPACE)
mCodePaint.setAntiAlias(true)
mCodePaint.setSubpixelText(true)
mCodePaint.setColor(0xFF000000)
val dimText = new Rect()
val dimText1 = new Rect()
val dimText2 = new Rect()
final val s1 = "WWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW"
final val s2 = "WWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW"
// dimText1.width() = char_length * s1.length + space_between_chars*(s1.length-1)
// dimText2.width() = char_length * s2.length + space_between_chars*(s2.length-1)
def getCharWidth(): Float = {
mCodePaint.getTextBounds(s1, 0, s1.length, dimText1)
mCodePaint.getTextBounds(s2, 0, s2.length, dimText2)
(dimText2.width() * (s1.length - 1) - dimText1.width() *(s2.length - 1))/(s1.length - s2.length)
}
def getIntercharWidth(): Float = {
mCodePaint.getTextBounds(s1, 0, s1.length, dimText1)
mCodePaint.getTextBounds(s2, 0, s2.length, dimText2)
(dimText1.width * s2.length - dimText2.width * s1.length)/(s1.length - s2.length)
}
// The main function that draw the text
def drawRuleCode(canvas: Canvas, ...): Unit = {
var char_width = getCharWidth() // At run time, equals 29
var space_width = getIntercharWidth() // At run time, equals -10
for(action <- ...) {
...
val column = action.column
val length = action.length
val x1 = left_x+8 + column*char_width + (column-1)*space_width - 0.5f*space_width
val x2 = x1 + length*char_width + (length-1)*space_width + 1*space_width
rectFData.set(x1, y1, x2, y2)
canvas.drawRoundRect(rectFData, 5, 5, selectPaint)
}
for(line <- ...) {
...
canvas.drawText(s, left_x + 8, ..., mCodePaint)
}
Do you have any idea on how to overcome that small alignment problem? Sometimes it makes a huge difference, especially when the expression is long.
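For reference, the two width measurements described in the comments inside the code above form a linear system in the character width c and the inter-character space s; solving it gives exactly the expressions used in getCharWidth and getIntercharWidth, with n_1, n_2 the lengths of s1 and s2 and W_1, W_2 the measured bound widths:
W_1 = c\,n_1 + s\,(n_1 - 1), \qquad W_2 = c\,n_2 + s\,(n_2 - 1)
c = \frac{W_2 (n_1 - 1) - W_1 (n_2 - 1)}{n_1 - n_2}, \qquad s = \frac{W_1\,n_2 - W_2\,n_1}{n_1 - n_2}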
EDIT: I drew the computed text bounds, and they are actually wrong. The text is slightly larger than the rectangle given by getTextBounds (violet line):
Instead of using getTextBounds, I need to pass the scale argument, because the font size does not scale linearly with the canvas:
Explanation here
var c = new Matrix()
val c_array = new Array[Float](9)
// The main function that draw the text
def drawRuleCode(canvas: Canvas, ...): Unit = {
canvas.getMatrix(c)
c.getValues(c_array)
val scale = c_array(Matrix.MSCALE_X) // Compute the current matrix scale
var box_width = getBoxWidth(scale)
for(action <- ...) {
...
val column = action.column
val length = action.length
val x1 = left_x+8 + column*box_width
val x2 = x1 + length*box_width
rectFData.set(x1, y1, x2, y2)
canvas.drawRoundRect(rectFData, 5, 5, selectPaint)
}
def getBoxWidth(scale: Float): Float = {
mCodePaint.setTextSize(fontSize * scale)
val result = mCodePaint.measureText(s1).toFloat / s1.length / scale
mCodePaint.setTextSize(fontSize )
result
}
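For what it's worth, getTextBounds returns the ink bounding box of the rendered glyphs, while measureText returns the advance width (how far the pen actually moves per character), so measuring a long run and dividing by its length, as above, matches where drawText really places each character of a monospaced font.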
