We are trying to calculate the rise over the run of an object. We know the angle of a right triangle and we know the run. With the scientific Microsoft calculator, we take the tangent of the angle and multiply it by the run to get the rise.
angle = 7.5 degrees, tangent = 0.1316; multiply by the run and the answer is 1.579. From this we know how to set the X and Y coordinates of the ImageView object.
We have seen all kinds of answers about how to do this in Java for Android, none of which give the results we get from the MS calculator. We tried this:
float T = (float) toRadians(tan(7.5));
Not even close. We also tried toDegrees.
So we have two questions:
How do we calculate the rise, knowing the run and the angle?
Is there a better way to set the X and Y values of the object so that it follows a path at a desired angle?
tan(angle) = rise/run. What you need is to rearrange this:
rise = tan(angle) * run
In PHP this is accomplished like this:
$rise = tan(deg2rad($degrees)) * $distance;
It took me a while to figure out that PHP's tan() function expects the angle to be in radians, so I had to convert to that first.
I know that's not an Android-specific response, but I'm leaving it here anyway in case it saves someone else some time.
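The same applies in Java for Android: Math.tan() also expects radians, so convert the angle with Math.toRadians() first and then take the tangent; the code in the question does it in the opposite order. A minimal sketch of the correct order of operations (in Python, where math.radians plays the role of Math.toRadians; the run of 12 is an assumed value chosen so the result matches the 1.579 from the question):

```python
from math import tan, radians

def rise(angle_degrees: float, run: float) -> float:
    # Convert degrees to radians FIRST, then take the tangent
    return tan(radians(angle_degrees)) * run

# tan(7.5 deg) is about 0.1317, so with a run of 12 the rise is about 1.58
print(rise(7.5, 12.0))
```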
The data in my graphs use millisecond timestamps and look approximately like this:
[1534928499109,52],[1534928522758,49],[1534928546408,51],[1534928570036,47],[1534928593671,54],
but with many thousands of data points. My questions:
1. For some reason the points stack on top of each other, as in the picture I've attached. How can I fix this? It also happens with HelloCharts.
Points stacking on top of each other.
2. I prefer MPAndroidChart, but HelloCharts has this awesome view, previewChart. Here's an example: https://github.com/lecho/hellocharts-android. Does MPAndroidChart support previewCharts or something similar?
3. I am currently using a ValueFormatter to change milliseconds to dates. Can I somehow get the difference between the smallest and biggest currently visible values, and use it to dynamically change the ValueFormatter to format the time more precisely?
Thanks in advance for any answers!
Only answering question 3:
chart.visibleXRange gives you the difference between the lowest and the highest visible x value. Similarly, chart.visibleYRange gives the values for the Y axis.
Be aware that, if you have defined a dragOffsetX and have scrolled all the way to the left or right border of the chart, the lowest or highest visible value (respectively) is the lowest/highest value actually occurring in your data, not the x value corresponding to the left/right border of the chart. To get that value, you can use chart.getValuesByTouchPoint(...) and chart.contentRect.
I use the following function to determine the exact interval between labels, which helps me decide at what granularity I want to format the labels (in my case, seconds vs. milliseconds). The main part, which transforms the rawInterval into interval, is taken from com.github.mikephil.charting.renderer.AxisRenderer.computeAxisValues and translated to Kotlin:
fun calculateIntervalBetweenLabels(): Double {
    // needs kotlin.math (pow, log10, floor) and MPAndroidChart's Utils
    // x value at the right edge minus x value at the left edge of the content area
    val range = chart.getValuesByTouchPoint(chart.contentRect.right, 0f, YAxis.AxisDependency.LEFT).x -
            chart.getValuesByTouchPoint(chart.contentRect.left, 0f, YAxis.AxisDependency.LEFT).x
    val rawInterval = range / chart.xAxis.labelCount
    // snap the raw interval to a "nice" value, as AxisRenderer.computeAxisValues does
    var interval = Utils.roundToNextSignificant(rawInterval).toDouble()
    val intervalMagnitude = Utils.roundToNextSignificant(10.0.pow(log10(interval).toInt())).toDouble()
    val intervalSigDigit = (interval / intervalMagnitude).toInt()
    if (intervalSigDigit > 5) {
        interval = floor(10 * intervalMagnitude)
    }
    return interval
}
In simpler cases without dragOffsetX, the first line could be replaced by val range = chart.visibleXRange.
In my ValueFormatter I do this:
override fun getFormattedValue(value: Float): String {
    return when {
        calculateIntervalBetweenLabels().roundToLong() >= 1000 -> formatValueInSeconds(value)
        else -> formatValueInMilliseconds(value)
    }
}
I've figured a few things out, in case anyone comes across this in the future and wonders the same thing.
1. MPAndroidChart's Entry class uses Float. A Float carries only about 24 bits of significand, so integers above 2^24 get rounded, and nearby millisecond timestamps collapse onto the same x-value. I fix this by subtracting 1.5 trillion from every value and dividing by 100. Then, in the ValueFormatter, I undo this.
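The precision loss is easy to demonstrate outside Android. The sketch below (Python, round-tripping through a 32-bit float via struct) shows two of the timestamps from the question collapsing to the same value, and staying distinct after the subtract-and-divide trick (note the offset has to be on the order of the timestamps themselves, about 1.5 × 10^12 ms):

```python
import struct

def as_float32(x: float) -> float:
    # Round-trip through IEEE 754 single precision, like Java's float
    return struct.unpack('f', struct.pack('f', x))[0]

a, b = 1534928499109, 1534928522758  # two adjacent timestamps, 23649 ms apart

# Near 1.5e12 a float's step size is about 131072, so both collapse together
print(as_float32(a) == as_float32(b))    # True

# After shifting and scaling, the step size is ~32, small enough to separate them
a2, b2 = (a - 1_500_000_000_000) / 100, (b - 1_500_000_000_000) / 100
print(as_float32(a2) == as_float32(b2))  # False
```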
2. I don't know yet.
3. My solution was to calculate the difference between every value that gets formatted in the ValueFormatter. If the difference is less than zero, the formatter has looped around; the (positive) difference between consecutive values within a pass is the displayed interval. Another answer suggested using chart.visibleXRange, which is much simpler.
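The loop-around idea can be sketched like this (Python; the list of values stands in for the sequence the ValueFormatter receives across rendering passes, which is an assumption about the call order):

```python
def displayed_interval(values):
    """Infer the interval between axis labels from the values a formatter sees."""
    prev, interval = None, None
    for v in values:
        if prev is not None and v - prev > 0:
            interval = v - prev  # two consecutive labels in the same pass
        # v - prev <= 0 means the formatter looped back to the first label
        prev = v
    return interval

# Two rendering passes over labels 0, 250, 500, 750
print(displayed_interval([0, 250, 500, 750, 0, 250, 500, 750]))  # 250
```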
I am trying to find the interpolated Y value of a point on an androidplot curve.
For example, I have three points in an array: A (0; 0), B (15; 5), C (30; 0).
I display them in androidplot with smoothing, using SplineLineAndPointFormatter.
How can I find the Y value of the point N(10; ?)?
Look at the example image.
Thanks in advance for any help.
First off, you should not use that interpolation method, because it uses cubic Béziers and will plot false data. I posted a full answer in the linked question back in 2014.
Instead you should use the CatmullRomInterpolator. Once you've switched over to that, you can retrieve the interpolated series by invoking CatmullRomInterpolator.interpolate(XYSeries, Params). You can then retrieve N directly from the interpolated series.
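As a rough illustration of what reading a value off the interpolated series looks like, here is a uniform Catmull-Rom sketch in Python for the three points from the question. CatmullRomInterpolator.interpolate(XYSeries, Params) is the real androidplot API; the duplicated endpoints, uniform parameterization, and bisection search here are simplifying assumptions for the sketch:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Uniform Catmull-Rom point between p1 and p2 at parameter t in [0, 1]."""
    def blend(a, b, c, d):
        return 0.5 * (2 * b
                      + (-a + c) * t
                      + (2 * a - 5 * b + 4 * c - d) * t * t
                      + (-a + 3 * b - 3 * c + d) * t ** 3)
    return (blend(p0[0], p1[0], p2[0], p3[0]),
            blend(p0[1], p1[1], p2[1], p3[1]))

A, B, C = (0.0, 0.0), (15.0, 5.0), (30.0, 0.0)

def y_at(x_target, lo=0.0, hi=1.0):
    # Bisect on t along the A->B segment (first endpoint duplicated),
    # where x(t) is monotonically increasing
    for _ in range(60):
        mid = (lo + hi) / 2
        x, _y = catmull_rom(A, A, B, C, mid)
        if x < x_target:
            lo = mid
        else:
            hi = mid
    return catmull_rom(A, A, B, C, (lo + hi) / 2)[1]

print(round(y_at(10.0), 2))  # roughly 4.07 with this parameterization
```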
I want to find the length of an object using the camera. I have searched a lot and found the relation between distance and view angle:
angle = arctan(d / 2f)
But I am frustrated and cannot find any relevant code, so please suggest working code for finding the object's height using the camera. If the distance from the object is known, how do I find the object's length?
Thanks in advance.
// Vertical field of view of the camera, in degrees
verticalViewAngleDegrees = myCamera.getParameters().getVerticalViewAngle();
// Real-world height that would exactly fill the frame at the given distance
heightOfObjectFillingImage = 2 * userSpecifiedDistance * tan(toRadians(verticalViewAngleDegrees / 2));
// Scale by the fraction of the image height the object occupies
// (make sure this division is done in floating point, not integer arithmetic)
approxHeightOfObject = verticalPixelsOfObject / verticalPixelsOfWholeImage * heightOfObjectFillingImage;
I'm not confident that the trigonometry is the best that I could do, but that is a first approximation.
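Plugging numbers into that approximation (Python sketch; the 40-degree view angle, 3 m distance, and pixel counts are made-up example values, not something read from the camera API):

```python
from math import tan, radians

vertical_view_angle_degrees = 40.0   # from Camera.Parameters.getVerticalViewAngle()
distance_m = 3.0                     # user-specified distance to the object
object_px, image_px = 600, 1200      # object spans half the image height

# Height of the real-world slice that exactly fills the frame at that distance
height_filling_image = 2 * distance_m * tan(radians(vertical_view_angle_degrees / 2))

# Scale by the fraction of the frame the object occupies
approx_height = object_px / image_px * height_filling_image
print(round(approx_height, 3))  # about 1.092 m
```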
So, this is a common problem in apps that track your location over a journey (a run or cycle workout, for example).
Clearly GPS navigators have less trouble, since they can assume you snap to a point on a road - however, if you're running in the park, snapping to some road grid is going to give you totally crazy numbers.
The problem as far as I see it is to combine the great-circle distances between the waypoints, but taking into account the errors (accuracy values) such that you don't veer off course too far for a low-accuracy point. The rough implementation in my head involves plotting some bezier curve (using the velocity/bearing at the point to add spline direction and weight) and integrating over it.
However, clearly this is something people have solved before. Does anyone know of implementations, or are they all buried in proprietary software?
Bonus points for anyone who can also use the (mostly) less accurate cell tower points (which come with different/out-of-sync timestamps, and no velocity or bearing information).
The eventual implementation will be in JavaScript or Python, whichever is faster (I'm using SL4A), but I'm looking for general algorithms here.
To get everyone started, here is the naive algorithm, not using any velocity or bearing info.
The arc length s is calculable from the two (long, lat) pairs (the start and end waypoints) of the segment we'll start with, by the standard formula.
Assuming we've converted the value pairs into standard spherical coordinates phi and theta (here as arrays, so using phi[0] and phi[1] for locations 0 and 1) in radians, the arc length is just:
from math import sin, cos, acos, sqrt
s = acos(
    sin(phi[0]) * sin(phi[1]) * cos(theta[0] - theta[1]) +
    cos(phi[0]) * cos(phi[1])
)
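A quick sanity check of that formula with two known cities (the coordinates and the roughly 344 km London-Paris great-circle distance are from memory, so treat them as approximate):

```python
from math import sin, cos, acos, radians

def central_angle(lat0, lon0, lat1, lon1):
    # phi is the polar angle (colatitude) and theta the longitude,
    # matching the convention in the formula above
    phi = [radians(90 - lat0), radians(90 - lat1)]
    theta = [radians(lon0), radians(lon1)]
    return acos(sin(phi[0]) * sin(phi[1]) * cos(theta[0] - theta[1])
                + cos(phi[0]) * cos(phi[1]))

EARTH_RADIUS_KM = 6371.0
s = central_angle(51.5074, -0.1278, 48.8566, 2.3522)  # London -> Paris
print(round(s * EARTH_RADIUS_KM))  # roughly 344 km
```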
However, since we've got a massive, horrid function, we need to use the chain rule to work out the first-order errors, and we get the following monster for delta_s:
delta_s = (1.0 / abs(sin(s))) * (
    delta_phi[0] * abs(
        sin(phi[0]) * cos(phi[1]) -
        cos(phi[0]) * sin(phi[1]) * cos(theta[0] - theta[1])
    ) +
    delta_phi[1] * abs(
        sin(phi[1]) * cos(phi[0]) -
        cos(phi[1]) * sin(phi[0]) * cos(theta[0] - theta[1])
    ) +
    (delta_theta[0] + delta_theta[1]) * abs(
        sin(phi[0]) * sin(phi[1]) * sin(theta[0] - theta[1])
    )
)
We perform this operation on every pair of successive points in order, sum the s values, and add the errors in quadrature as usual:
accumulator = 0.0
for error in errors:
    accumulator += error * error
journey_error = sqrt(accumulator)
and thus we know the uncertainty on our rubbish distance estimate. (We can even keep the accumulator around to speed up the calculation if we add a few points at the end, as we would in practice with real-time data.)
However, this is going to give us huge errors and only a very fuzzy idea of how far we've actually gone. This can't be how actual GPS equipment estimates distances, as it would never be accurate enough unless it had an amazing signal all the time.
What we need is some more nuanced path approximation that only bends the path out of position for the kind of inaccurate points shown, rather than diverting it completely and massively increasing the distance estimate. In asking the question, I was hoping to find out how the existing implementations (probably) do it!
I am creating an augmented reality app that simply shows a TextView when the phone is facing a point of interest (whose GPS position is stored on the phone). The TextView is painted at the point of interest's location on the screen.
It works OK; the problem is that the compass and accelerometer are very noisy, and the TextView constantly moves up and down and left and right because of the inaccuracy of the sensors.
Is there a way to solve this?
Our problems are the same: I also had this issue when I created a simple augmented reality project. The solution is to use exponential smoothing or a moving average function. I recommend exponential smoothing because it only needs to store one set of previous values. A sample implementation is below:
private float[] exponentialSmoothing(float[] input, float[] output, float alpha) {
    if (output == null)
        return input;
    for (int i = 0; i < input.length; i++) {
        // Move each output value a fraction alpha of the way toward the new input
        output[i] = output[i] + alpha * (input[i] - output[i]);
    }
    return output;
}
Alpha is the smoothing factor (0 <= alpha <= 1). If you set alpha = 1, the output will be the same as the input (no smoothing at all). If you set alpha = 0, the output will never change. To remove noise, you can simply smooth the accelerometer and magnetometer values.
In my case, I use an alpha value of 0.2 for the accelerometer and 0.5 for the magnetometer. The object is much more stable and the movement is quite nice.
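The effect is easy to see on synthetic data (a Python sketch of the same update rule; the noise level and the alpha of 0.2 mirroring the accelerometer setting above are illustrative assumptions):

```python
import random

def exponential_smoothing(input_vals, output_vals, alpha):
    # Same update as the Java version: move output toward input by a factor alpha
    if output_vals is None:
        return list(input_vals)
    return [o + alpha * (i - o) for i, o in zip(input_vals, output_vals)]

random.seed(42)
true_value = 9.81  # e.g. a steady gravity reading on one axis
smoothed = None
raw, filtered = [], []
for _ in range(500):
    sample = [true_value + random.gauss(0, 0.5)]  # noisy sensor sample
    smoothed = exponential_smoothing(sample, smoothed, 0.2)
    raw.append(sample[0])
    filtered.append(smoothed[0])

spread = lambda xs: max(xs[100:]) - min(xs[100:])  # skip the warm-up samples
print(spread(filtered) < spread(raw))  # True: the filtered signal jitters far less
```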
You should take a look at low-pass filters for your orientation data, or at sensor fusion if you want to go a step further.
Good Luck with your app.
JQCorreia
I solved it with a simple trick. It will delay your results a bit, but it surely avoids the inaccuracy of the compass and accelerometer.
Create a history of the last N values: save each value to an array and increment the index; when you reach N, start at zero again. Then simply use the arithmetic average of the stored values.
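The history trick can be sketched as a small ring buffer (Python; N = 5 and the sample readings are arbitrary choices):

```python
class MovingAverage:
    """Keep the last n values in a ring buffer and average them."""
    def __init__(self, n):
        self.buf = [0.0] * n
        self.index = 0
        self.count = 0

    def add(self, value):
        self.buf[self.index] = value
        self.index = (self.index + 1) % len(self.buf)  # wrap back to zero at n
        self.count = min(self.count + 1, len(self.buf))
        return sum(self.buf[:self.count]) / self.count

avg = MovingAverage(5)
for reading in [10.0, 12.0, 8.0, 11.0, 9.0]:
    smoothed = avg.add(reading)
print(smoothed)  # 10.0, the average of the last five readings
```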
Integration of gyroscope sensor readings can give a huge improvement in the stability of the final estimation of the orientation.
Have a look at the steady compass application if your device has a gyroscope, or just have a look at the video if you do not have a gyroscope.
The integration of gyroscope can be done in a rather simple way using a complementary filter.
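A complementary filter combines the fast-but-drifting integrated gyroscope rate with the slow-but-stable absolute angle from the compass/accelerometer. A minimal one-axis sketch (Python; the 0.98 coefficient, 50 Hz rate, and sample data are illustrative assumptions, not values from any particular device):

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, k=0.98):
    # Trust the integrated gyro in the short term, the absolute angle long term
    return k * (angle + gyro_rate * dt) + (1 - k) * accel_angle

angle = 0.0
dt = 0.02  # 50 Hz sensor updates
for _ in range(200):
    gyro_rate = 0.0     # gyro says we are not rotating
    accel_angle = 30.0  # absolute (noisy in practice) reading says 30 degrees
    angle = complementary_filter(angle, gyro_rate, accel_angle, dt)
print(round(angle, 1))  # converges toward 30.0
```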