I am implementing my own camera Activity.
To rotate the captured image I need to know the orientation at the time of shooting.
Is there a higher-level API on top of the pitch and roll sensor values that tells me whether my device is oriented:
- top-down: holding the phone normally, in portrait
- down-top: upside-down portrait
- right-left: landscape, with the top of the phone on the right side
- left-right: landscape, with the top of the phone on the left side
Or is there any other way to get this directly from the system?
Sadly, cameras work in a weird way on Android when they are not in landscape mode (which is the "natural" orientation for the camera on Android).
The best thing to do is to set the activity to landscape mode, add the onConfigurationChanged callback, and add android:configChanges="orientation" to the manifest in order to get the current orientation.
When capturing the image, check the orientation and act according to whatever logic you wish to have.
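A rough sketch of the setup described above (the field name is illustrative):
// in the manifest: android:screenOrientation="landscape" and android:configChanges="orientation"
private int currentOrientation = Configuration.ORIENTATION_LANDSCAPE; // remembered for the capture

@Override
public void onConfigurationChanged(Configuration newConfig) {
    super.onConfigurationChanged(newConfig);
    // ORIENTATION_PORTRAIT or ORIENTATION_LANDSCAPE; use it to rotate the captured image
    currentOrientation = newConfig.orientation;
}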
OK, I solved my problem to the point where it works for me; I left out the down-top recognition.
import android.hardware.SensorEvent;

public class DeviceOrientation {
public static final int ORIENTATION_PORTRAIT = 0;
public static final int ORIENTATION_LANDSCAPE_REVERSE = 1;
public static final int ORIENTATION_LANDSCAPE = 2;
public static final int ORIENTATION_PORTRAIT_REVERSE = 3;
int smoothness = 1;
public float azimuth = 0;
public float averagePitch = 0;
public float averageRoll = 0;
public int orientation = ORIENTATION_PORTRAIT;
private float[] pitches;
private float[] rolls;
public DeviceOrientation(int smoothness) {
this.smoothness = smoothness;
pitches = new float[smoothness];
rolls = new float[smoothness];
}
public void addSensorEvent(SensorEvent event) {
azimuth = event.values[0];
averagePitch = addValue(event.values[1], pitches);
averageRoll = addValue(event.values[2], rolls);
orientation = calculateOrientation();
}
private float addValue(float value, float[] values) {
float average = 0;
for(int i=1; i<smoothness; i++) {
values[i-1] = values[i];
average += values[i];
}
values[smoothness-1] = value;
average = (average + value)/smoothness;
return average;
}
/** handles all 4 possible positions perfectly */
private int calculateOrientation() {
// finding local orientation dip
if (((orientation == ORIENTATION_PORTRAIT || orientation == ORIENTATION_PORTRAIT_REVERSE)
&& (averageRoll > -30 && averageRoll < 30))) {
if (averagePitch > 0)
return ORIENTATION_PORTRAIT_REVERSE;
else
return ORIENTATION_PORTRAIT;
} else {
// divides between all orientations
if (Math.abs(averagePitch) >= 30) {
if (averagePitch > 0)
return ORIENTATION_PORTRAIT_REVERSE;
else
return ORIENTATION_PORTRAIT;
} else {
if (averageRoll > 0) {
return ORIENTATION_LANDSCAPE_REVERSE;
} else {
return ORIENTATION_LANDSCAPE;
}
}
}
}
}
Explanation:
If I am in portrait mode and tilt the phone forward until it is horizontal, the rest of the code would switch it to landscape.
Therefore I check whether it is in portrait and make the conditions hard to leave this mode.
This is what I meant by the local dip.
The rest just divides between the three other directions.
One thing is still bad: if the device is in either landscape orientation and gets tilted a few degrees backwards, the pitch jumps from ~2 to ~175. At that point my code flips between landscape and portrait.
The smoothness parameter smooths the sensor data by averaging the last n values. It isn't really necessary.
I hope this will help others. If you can improve the code further, please let me know.
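For reference, here is a minimal usage sketch. It assumes the class is fed events from the deprecated Sensor.TYPE_ORIENTATION, whose values are azimuth/pitch/roll in degrees (which is what the thresholds above expect), and that registration happens e.g. in onResume():
SensorManager sm = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
final DeviceOrientation deviceOrientation = new DeviceOrientation(5); // average over 5 samples

SensorEventListener listener = new SensorEventListener() {
    @Override
    public void onSensorChanged(SensorEvent event) {
        deviceOrientation.addSensorEvent(event);
        // deviceOrientation.orientation now holds one of the four ORIENTATION_* constants
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
};
sm.registerListener(listener, sm.getDefaultSensor(Sensor.TYPE_ORIENTATION),
        SensorManager.SENSOR_DELAY_UI);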
I am trying to change the FOV value for a VR camera in Unity. In the editor it works just fine, but when I put the app on the phone, the camera has the default FOV value. Can somebody please tell me how I can effectively change the FOV value on the phone?
Thanks
You can do it through code. Here's a script that will let you tap to adjust the FOV in realtime.
using UnityEngine;
using System.Collections;
using UnityEngine.VR; // you always need this to use special VR functions
public class VRUtility : MonoBehaviour {
public GameObject camGroup;
public Camera leftCam;
public Camera rightCam;
public float fov;
private float fovMin = 40;
private float fovMax = 100;
private float camGroupX = 0;
private float camGroupXMin = 0;
private float camGroupXMax = 100;
private float camGroupXStep = 20;
// Use this for initialization
public void Start () {
// renderScale controls the render-target resolution: values above 1.0 trade
// performance for sharper visuals, values below 1.0 do the opposite
// (this matters a lot on laptops and phones, where the framerate is often low)
VRSettings.renderScale = 3.0f;
OVRTouchpad.Create();
OVRTouchpad.TouchHandler += HandleTouchHandler;
leftCam.fieldOfView = fov;
rightCam.fieldOfView = fov;
}
// Update is called once per frame
void Update () {
if ( Input.GetKeyDown(KeyCode.R) ) {
InputTracking.Recenter(); // recenter "North" for VR, so that you don't have to twist around randomly
}
// dynamically adjust VR visual quality in-game
if ( Input.GetKeyDown(KeyCode.RightBracket) ) { // increase visual quality
VRSettings.renderScale = Mathf.Clamp01( VRSettings.renderScale + 0.1f);
}
if ( Input.GetKeyDown(KeyCode.LeftBracket) ) { // decrease visual quality
VRSettings.renderScale = Mathf.Clamp01( VRSettings.renderScale - 0.1f);
}
}
private void HandleTouchHandler(object sender, System.EventArgs e){
var touchArgs = (OVRTouchpad.TouchArgs)e;
if (touchArgs.TouchType == OVRTouchpad.TouchEvent.SingleTap) {
fov += 10;
if (fov > fovMax) {
fov = fovMin;
}
leftCam.fieldOfView = fov;
rightCam.fieldOfView = fov;
}
if (touchArgs.TouchType == OVRTouchpad.TouchEvent.Left || touchArgs.TouchType == OVRTouchpad.TouchEvent.Right){
camGroupX += camGroupXStep;
if (camGroupX > camGroupXMax){
camGroupX = camGroupXMin;
}
camGroup.transform.position = new Vector3(camGroupX, 0, 0);
}
}
}
I'm also moving a camera between locations, but just ignore that.
Does anyone know how to get a smooth vertical orientation degree in Android?
I already tried OrientationEventListener as shown below, but it's very noisy. I already tried all rates (Normal, UI, Game and Fastest); they all show the same result.
myOrientationEventListener = new OrientationEventListener(this, SensorManager.SENSOR_DELAY_NORMAL) {
@Override
public void onOrientationChanged(int arg0) {
orientation = arg0;
Log.i("orientation", "orientation:" + orientation);
}
};
So there are two things going on that can affect what you need.
Sensor delay. Android provides four sensor delay modes: SENSOR_DELAY_NORMAL, SENSOR_DELAY_UI, SENSOR_DELAY_GAME, and SENSOR_DELAY_FASTEST, where SENSOR_DELAY_NORMAL has the longest interval between two data points and SENSOR_DELAY_FASTEST the shortest. The shorter the interval, the higher the sampling rate (number of samples per second). A higher sampling rate gives you more "responsive" data but more noise, while a lower sampling rate gives you more "laggy" but smoother data.
Noise filtering. With the above in mind, you need to decide which route you want to take. Does your application need fast response? If it does, you probably want to choose a higher sampling rate. Does your application need smooth data? I guess this is obviously YES given the context of the question, which means you need noise filtering. For sensor data, noise is mostly high frequency in nature (noise value oscillates very fast with time). So a low pass filter (LPF) is generally adequate.
A simple way to implement LPF is exponential smoothing. To integrate with your code:
// these must be fields (not locals), since the callback below modifies them
int orientation = <init value>;
float update_rate = <value between 0 and 1>;
myOrientationEventListener = new OrientationEventListener(this, SensorManager.SENSOR_DELAY_NORMAL) {
@Override
public void onOrientationChanged(int arg0) {
orientation = (int) (orientation * (1f - update_rate) + arg0 * update_rate);
Log.i("orientation", "orientation:" + orientation);
}
};
A larger update_rate means the resulting data is less smooth, which should be intuitive: if update_rate == 1f, it falls back to your original code. Note also that the appropriate update_rate depends on the time interval between updates (which is related to the sensor delay modes). You can probably tune this value until it works for you, but if you want to know exactly how it works, check the alpha value definition under Electronic low-pass filters -> Discrete-time realization.
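If you would rather derive update_rate from a desired time constant than hand-tune it, a rough sketch (the names and time constant below are assumptions, not part of the answer) is:
private static final float TIME_CONSTANT_S = 0.3f; // assumed value; tune to taste
private long lastUpdateNs = 0;

// alpha = dt / (RC + dt), the standard discrete-time low-pass coefficient
private float computeUpdateRate() {
    long now = System.nanoTime();
    if (lastUpdateNs == 0) {
        lastUpdateNs = now;
        return 1f; // take the very first sample as-is
    }
    float dt = (now - lastUpdateNs) / 1e9f; // seconds since the last callback
    lastUpdateNs = now;
    return dt / (TIME_CONSTANT_S + dt); // longer gaps -> heavier weight on the new sample
}
Calling this from onOrientationChanged and using the result as update_rate keeps the effective cutoff roughly independent of the sensor delay mode.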
I had a similar problem when showing an artificial horizon on my device. A low pass filter (LPF) solved the issue.
However, if you use the orientation angle in degrees and apply the LPF to it blindly, the result is faulty when the device is in portrait mode and turned from left to right or vice versa. The reason is the jump between 359 and 0 degrees. Therefore I recommend converting the angle to radians and applying the LPF to the sin and cos values of the orientation angle.
I also recommend using a dynamic alpha (update rate) for the LPF. A static alpha might be perfect on your device but not on others.
The following class filters based on radians and uses a dynamic alpha as described above:
import static java.lang.Math.*;
class Filter {
private static final float TIME_CONSTANT = .297f;
private static final float NANOS = 1000000000.0f;
private static final int MAX = 360;
private double alpha;
private float timestamp;
private float timestampOld;
private int count;
private int values[];
Filter() {
timestamp = System.nanoTime();
timestampOld = System.nanoTime();
values = new int[0];
}
int filter(int input) {
//there is no need to filter if we have only one
if(values.length == 0) {
values = new int[] {0, input};
return input;
}
//filter based on last element from array and input
int filtered = filter(values[1], input);
//new array based on previous result and filter
values = new int[] {values[1], filtered};
return filtered;
}
private int filter(int previous, int current) {
calculateAlpha();
//convert to radians
double radPrev = toRadians(previous);
double radCurrent = toRadians(current);
//filter based on sin & cos
double sumSin = filter(sin(radPrev), sin(radCurrent));
double sumCos = filter(cos(radPrev), cos(radCurrent));
//calculate result angle
double radRes = atan2(sumSin, sumCos);
//convert radians to degree, round it and normalize (modulo of 360)
long round = round(toDegrees(radRes));
return (int) ((MAX + round) % MAX);
}
//dynamic alpha
private void calculateAlpha() {
timestamp = System.nanoTime();
float diff = timestamp - timestampOld;
double dt = 1 / (count / (diff / NANOS));
count++;
alpha = dt/(TIME_CONSTANT + dt);
}
private double filter(double previous, double current) {
return (previous + alpha * (current - previous));
}
}
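For example, the filter could be plugged into the OrientationEventListener from the question roughly like this (a sketch; the log tag and delay are arbitrary):
final Filter filter = new Filter();

OrientationEventListener listener = new OrientationEventListener(this, SensorManager.SENSOR_DELAY_GAME) {
    @Override
    public void onOrientationChanged(int degrees) {
        if (degrees == OrientationEventListener.ORIENTATION_UNKNOWN) {
            return; // device is flat; nothing meaningful to filter
        }
        int smoothed = filter.filter(degrees);
        Log.i("orientation", "smoothed: " + smoothed);
    }
};
listener.enable();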
For further reading, see this discussion.
I am developing an application like Runtastic Pedometer using the step-detection algorithm from the grepcode project, but my results are nowhere near theirs.
My code is as follows:
public void onSensorChanged(SensorEvent event)
{
Sensor sensor = event.sensor;
synchronized (this)
{
if (sensor.getType() == Sensor.TYPE_ORIENTATION) {}
else {
int j = (sensor.getType() == Sensor.TYPE_ACCELEROMETER) ? 1 : 0;
if (j == 1) {
float vSum = 0;
for (int i=0 ; i<3 ; i++) {
final float v = mYOffset + event.values[i] * mScale[j];
vSum += v;
}
int k = 0;
float v = vSum / 3;
//Log.e("data", "data"+v);
float direction = (v > mLastValues[k] ? 1 : (v < mLastValues[k] ? -1 : 0));
if (direction == - mLastDirections[k]) {
// Direction changed
int extType = (direction > 0 ? 0 : 1); // minimum or maximum?
mLastExtremes[extType][k] = mLastValues[k];
float diff = Math.abs(mLastExtremes[extType][k] - mLastExtremes[1 - extType][k]);
if (diff > mLimit) {
boolean isAlmostAsLargeAsPrevious = diff > (mLastDiff[k]*2/3);
boolean isPreviousLargeEnough = mLastDiff[k] > (diff/3);
boolean isNotContra = (mLastMatch != 1 - extType);
if (isAlmostAsLargeAsPrevious && isPreviousLargeEnough && isNotContra) {
for (StepListener stepListener : mStepListeners) {
stepListener.onStep();
}
mLastMatch = extType;
}
else {
Log.i(TAG, "no step");
mLastMatch = -1;
}
}
mLastDiff[k] = diff;
}
mLastDirections[k] = direction;
mLastValues[k] = v;
}
}
}
}
For registering the sensor:
mSensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
mSensor = mSensorManager.getDefaultSensor(
Sensor.TYPE_ACCELEROMETER);
mSensorManager.registerListener(mStepDetector,mSensor,SensorManager.SENSOR_DELAY_NORMAL);
In the algorithm I have different sensitivity levels:
public void setSensitivity(float sensitivity) {
mLimit = sensitivity; // 1.97 2.96 4.44 6.66 10.00 15.00 22.50 33.75 50.62
}
On various sensitivity levels my results are:

sensitivity    Runtastic Pedometer    my app
10.00          3870                   5500
11.00          3000                   4000
11.15          3765                   4576
13.00          2000                   890
11.30          754                    986
I am not getting any consistent pattern that matches the expected counts.
From my analysis the reference application seems to use Sensor.TYPE_MAGNETIC_FIELD for the step calculation as well. Please point me to an algorithm that can meet this requirement.
The first thing you need to do is decide on an algorithm. As far as I know, there are roughly three ways to detect steps using accelerometers described in the literature:
1. Use the Pythagorean theorem to calculate the magnitude of the acceleration vector for each sample from the accelerometer. Low-pass filter the magnitude signal to remove high-frequency noise and then look for peaks and valleys in the filtered signal. You may need to add additional requirements to remove false positives. This is by far the simplest way to detect steps; it is also the way that most, if not all, ordinary pedometers of the sort you can buy in a sports store work. (A sketch of this approach follows the list.)
2. Use Pythagoras like in (1), then run the signal through an FFT and compare the output to known outputs of walking. This requires access to a fairly large amount of training data.
3. Feed the accelerometer data into an algorithm that uses a suitable machine-learning technique, for example a neural network or a discrete wavelet transform. You can of course include other sensors in this approach. This also requires access to a fairly large amount of training data.
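Here is a very rough sketch of approach 1 (field names and thresholds are illustrative; a real pedometer needs more false-positive filtering):
private float smoothed = SensorManager.STANDARD_GRAVITY;
private float previous = SensorManager.STANDARD_GRAVITY;
private boolean rising = false;
private int steps = 0;

public void onAccelerometerSample(float x, float y, float z) {
    // magnitude of the acceleration vector (Pythagoras)
    float magnitude = (float) Math.sqrt(x * x + y * y + z * z);
    // simple exponential low-pass filter to suppress high-frequency noise
    smoothed += 0.1f * (magnitude - smoothed);
    // count a step when the filtered signal stops rising well above gravity (a peak)
    boolean nowRising = smoothed > previous;
    if (rising && !nowRising && smoothed > SensorManager.STANDARD_GRAVITY + 1.0f) {
        steps++; // extra requirements (minimum peak spacing etc.) go here
    }
    rising = nowRising;
    previous = smoothed;
}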
Once you have decided on an algorithm you will probably want to use something like Matlab or SciPy to test it on your computer using recordings made on Android phones. Dump the accelerometer data to a csv file on your phone, make a note of how many steps the file represents, copy the file to your computer, and run your algorithm on the data to see if it gets the step count right. That way you can detect problems with the algorithm and correct them.
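Dumping the samples could look something like this (a sketch; the file name and location are arbitrary, and in practice you would buffer the writes rather than open the file per sample):
private void appendSampleToCsv(SensorEvent event) {
    File file = new File(getExternalFilesDir(null), "accel.csv");
    try (FileWriter writer = new FileWriter(file, true)) {
        // timestamp plus raw x, y, z values, one sample per line
        writer.write(event.timestamp + "," + event.values[0] + ","
                + event.values[1] + "," + event.values[2] + "\n");
    } catch (IOException e) {
        Log.e("csv", "could not write sample", e);
    }
}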
If this sounds difficult, then the best way to get access to good step detection is probably to wait until more phones come with the built-in step counter that KitKat enables.
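For completeness, reading that built-in counter (Sensor.TYPE_STEP_COUNTER, API 19+, only on devices with the hardware) looks roughly like this:
SensorManager sm = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
Sensor stepCounter = sm.getDefaultSensor(Sensor.TYPE_STEP_COUNTER); // null if the device has none

if (stepCounter != null) {
    sm.registerListener(new SensorEventListener() {
        @Override
        public void onSensorChanged(SensorEvent event) {
            // values[0] is the total number of steps since the last reboot
            Log.i("steps", "steps since boot: " + (long) event.values[0]);
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }, stepCounter, SensorManager.SENSOR_DELAY_NORMAL);
}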
https://github.com/bagilevi/android-pedometer
I hope this might be helpful.
I am using step detection in my walking instrument and get nice step-detection results. I use AChartEngine to plot the accelerometer data; take a look here.
What I do:
1. Analyse the magnitude vector of the accelerometer sensor.
2. Set a changeable threshold level. When the signal from the accelerometer rises above it, I count it as a step.
3. Set a period of inactivity (for step detection) after the first crossing of the threshold.
Point 3 is calculated as follows:
- Arbitrarily set the maximum walking tempo (e.g. 120 bpm).
- If 60 bpm means 1000 ms per step, then 120 bpm means 500 ms per step.
- The accelerometer delivers data at a chosen rate (SENSOR_DELAY_NORMAL, SENSOR_DELAY_GAME, etc.). With SENSOR_DELAY_GAME, T ≈ 20 ms (this is mentioned in the Android documentation).
- n is the number of samples to omit (after crossing the threshold):
n = 500 ms / T
n = 500 / 20 = 25 (plenty of them; you can adjust this value).
- After that, the threshold becomes active again.
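In code, this threshold-plus-inactive-period logic boils down to something like the following sketch (the threshold value is illustrative):
private static final float THRESHOLD = 12f;  // illustrative magnitude threshold
private static final int SKIP_SAMPLES = 25;  // ~500 ms at SENSOR_DELAY_GAME (T ≈ 20 ms)
private int samplesToSkip = 0;
private int stepCount = 0;

public void onMagnitude(float magnitude) {
    if (samplesToSkip > 0) {
        samplesToSkip--; // threshold is inactive; ignore this sample
        return;
    }
    if (magnitude > THRESHOLD) {
        stepCount++;                  // first crossing counts as a step
        samplesToSkip = SKIP_SAMPLES; // then stay quiet for ~500 ms
    }
}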
Take a look at this picture:
This is my implementation. It was written about 1.5-2 years ago and I don't really remember all the details anymore, but it worked, and it worked well for my needs.
I know that this is a really big class (some methods were deleted), but maybe it will be helpful. If not, I'll just remove this answer...
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class StepDetector implements SensorEventListener
{
public static final int MAX_BUFFER_SIZE = 5;
private static final int Y_DATA_COUNT = 4;
private static final double MIN_GRAVITY = 2;
private static final double MAX_GRAVITY = 1200;
public void onSensorChanged(final SensorEvent sensorEvent)
{
final float[] values = sensorEvent.values;
final Sensor sensor = sensorEvent.sensor;
if (sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD)
{
magneticDetector(values, sensorEvent.timestamp / (500 * 1000000L)); // 500 ms buckets; note "10 ^ 6" is XOR in Java, not a power
}
if (sensor.getType() == Sensor.TYPE_ACCELEROMETER)
{
accelDetector(values, sensorEvent.timestamp / (500 * 1000000L)); // 500 ms buckets; "^" is XOR in Java
}
}
private ArrayList<float[]> mAccelDataBuffer = new ArrayList<float[]>();
private ArrayList<Long> mMagneticFireData = new ArrayList<Long>();
private Long mLastStepTime = null;
private ArrayList<Pair> mAccelFireData = new ArrayList<Pair>();
private void accelDetector(float[] detectedValues, long timeStamp)
{
float[] currentValues = new float[3];
for (int i = 0; i < currentValues.length; ++i)
{
currentValues[i] = detectedValues[i];
}
mAccelDataBuffer.add(currentValues);
if (mAccelDataBuffer.size() > StepDetector.MAX_BUFFER_SIZE)
{
double avgGravity = 0;
for (float[] values : mAccelDataBuffer)
{
avgGravity += Math.abs(Math.sqrt(
values[0] * values[0] + values[1] * values[1] + values[2] * values[2]) - SensorManager.STANDARD_GRAVITY);
}
avgGravity /= mAccelDataBuffer.size();
if (avgGravity >= MIN_GRAVITY && avgGravity < MAX_GRAVITY)
{
mAccelFireData.add(new Pair(timeStamp, true));
}
else
{
mAccelFireData.add(new Pair(timeStamp, false));
}
if (mAccelFireData.size() >= Y_DATA_COUNT)
{
checkData(mAccelFireData, timeStamp);
mAccelFireData.remove(0);
}
mAccelDataBuffer.clear();
}
}
private void checkData(ArrayList<Pair> accelFireData, long timeStamp)
{
boolean stepAlreadyDetected = false;
Iterator<Pair> iterator = accelFireData.iterator();
while (iterator.hasNext() && !stepAlreadyDetected)
{
stepAlreadyDetected = iterator.next().first.equals(mLastStepTime);
}
if (!stepAlreadyDetected)
{
int firstPosition = Collections.binarySearch(mMagneticFireData, accelFireData.get(0).first);
int secondPosition = Collections
.binarySearch(mMagneticFireData, accelFireData.get(accelFireData.size() - 1).first - 1);
if (firstPosition > 0 || secondPosition > 0 || firstPosition != secondPosition)
{
if (firstPosition < 0)
{
firstPosition = -firstPosition - 1;
}
if (firstPosition < mMagneticFireData.size() && firstPosition > 0)
{
mMagneticFireData = new ArrayList<Long>(
mMagneticFireData.subList(firstPosition - 1, mMagneticFireData.size()));
}
iterator = accelFireData.iterator();
while (iterator.hasNext())
{
if (iterator.next().second)
{
mLastStepTime = timeStamp;
accelFireData.remove(accelFireData.size() - 1);
accelFireData.add(new Pair(timeStamp, false));
onStep();
break;
}
}
}
}
}
private float mLastDirections;
private float mLastValues;
private float mLastExtremes[] = new float[2];
private Integer mLastType;
private ArrayList<Float> mMagneticDataBuffer = new ArrayList<Float>();
private void magneticDetector(float[] values, long timeStamp)
{
mMagneticDataBuffer.add(values[2]);
if (mMagneticDataBuffer.size() > StepDetector.MAX_BUFFER_SIZE)
{
float avg = 0;
for (int i = 0; i < mMagneticDataBuffer.size(); ++i)
{
avg += mMagneticDataBuffer.get(i);
}
avg /= mMagneticDataBuffer.size();
float direction = (avg > mLastValues ? 1 : (avg < mLastValues ? -1 : 0));
if (direction == -mLastDirections)
{
// Direction changed
int extType = (direction > 0 ? 0 : 1); // minimum or maximum?
mLastExtremes[extType] = mLastValues;
float diff = Math.abs(mLastExtremes[extType] - mLastExtremes[1 - extType]);
if (diff > 8 && (null == mLastType || mLastType != extType))
{
mLastType = extType;
mMagneticFireData.add(timeStamp);
}
}
mLastDirections = direction;
mLastValues = avg;
mMagneticDataBuffer.clear();
}
}
public static class Pair implements Serializable
{
Long first;
boolean second;
public Pair(long first, boolean second)
{
this.first = first;
this.second = second;
}
@Override
public boolean equals(Object o)
{
if (o instanceof Pair)
{
return first.equals(((Pair) o).first);
}
return false;
}
}
}
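For reference, wiring this class up would look roughly like this (a sketch; the delay constant is an assumption):
SensorManager sensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
StepDetector stepDetector = new StepDetector();

// the detector listens to both sensors, so register it for each of them
sensorManager.registerListener(stepDetector,
        sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
        SensorManager.SENSOR_DELAY_GAME);
sensorManager.registerListener(stepDetector,
        sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
        SensorManager.SENSOR_DELAY_GAME);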
One main difference I spotted between your implementation and the code in the grepcode project is the way you register the listener.
Your code:
mSensorManager.registerListener(mStepDetector,
mSensor,
SensorManager.SENSOR_DELAY_NORMAL);
Their code:
mSensorManager.registerListener(mStepDetector,
mSensor,
SensorManager.SENSOR_DELAY_FASTEST);
This is a big difference. SENSOR_DELAY_NORMAL is intended for orientation changes and is therefore not that fast. (Ever noticed that it takes a moment between you rotating the device and the screen actually rotating? That's because this functionality does not need to be super fast; it would probably even be annoying if it were. The rate at which you get updates is simply not that high.)
On the other hand, SENSOR_DELAY_FASTEST is intended for things like pedometers: you want the sensor data as fast and often as possible, so your calculations of steps will be as accurate as possible.
Try to switch to the SENSOR_DELAY_FASTEST rate, and test again! It should make a big difference.
public void onSensorChanged(SensorEvent event) {
if (event.sensor.getType()==Sensor.TYPE_ACCELEROMETER ){
float x = event.values[0];
float y = event.values[1];
float z = event.values[2];
// squared magnitude of the acceleration; gravity alone gives about 9.81^2 ≈ 96,
// so 100 and 125 act as hysteresis thresholds around the resting level
currentvectorSum = (x * x + y * y + z * z);
if (currentvectorSum < 100 && !inStep) {
inStep = true;
}
if (currentvectorSum > 125 && inStep) {
inStep = false;
numSteps++;
Log.d("TAG_ACCELEROMETER", "\t" + numSteps);
}
}
}
I'm trying to learn how to use the accelerometer by creating (what I thought would be) a simple app to mimic a crude maraca.
The objective is that when the phone is flicked downwards quickly, it emits a maraca sound at the end of that flick, and likewise a different sound is emitted at the end of an upward flick.
The strategy for implementing this is to detect when the acceleration passes over a certain threshold. When this happens, ShakeIsHappening is set to true, and the data from the z axis is fed into an array. A comparison is made to see whether the first position in the z array is greater or lesser than the most recent position, to see whether the phone has been moved upwards or downwards. This is stored in a boolean called zup.
Once the acceleration goes below zero, we assume the flick movement has ended and emit a sound, chosen depending on whether the movement was up or down (zup).
Here is the code:
public class MainActivity extends Activity implements SensorEventListener {
private float mAccelNoGrav;
private float mAccelWithGrav;
private float mLastAccelWithGrav;
ArrayList<Float> z = new ArrayList<Float>();
public static float finalZ;
public static boolean shakeIsHappening;
public static int beatnumber = 0;
public static float highZ;
public static float lowZ;
public static boolean flick;
public static boolean pull;
public static SensorManager sensorManager;
public static Sensor accelerometer;
private SoundPool soundpool;
private HashMap<Integer, Integer> soundsMap;
private boolean zup;
private boolean shakeHasHappened;
public static int shakesound1 = 1;
public static int shakesound2 = 2;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
results = (TextView) findViewById(R.id.results);
clickresults = (TextView) findViewById(R.id.clickresults);
sensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
accelerometer = sensorManager
.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
mAccelNoGrav = 0.00f;
mAccelWithGrav = SensorManager.GRAVITY_EARTH;
mLastAccelWithGrav = SensorManager.GRAVITY_EARTH;
soundpool = new SoundPool(4, AudioManager.STREAM_MUSIC, 100);
soundsMap = new HashMap<Integer, Integer>();
soundsMap.put(shakesound1, soundpool.load(this, R.raw.shake1, 1));
soundsMap.put(shakesound2, soundpool.load(this, R.raw.shake1, 1));
}
public void playSound(int sound, float fSpeed) {
AudioManager mgr = (AudioManager)getSystemService(Context.AUDIO_SERVICE);
float streamVolumeCurrent = mgr.getStreamVolume(AudioManager.STREAM_MUSIC);
float streamVolumeMax = mgr.getStreamMaxVolume(AudioManager.STREAM_MUSIC);
float volume = streamVolumeCurrent / streamVolumeMax;
soundpool.play(soundsMap.get(sound), volume, volume, 1, 0, fSpeed);
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
getMenuInflater().inflate(R.menu.main, menu);
return true;
}
@Override
public void onAccuracyChanged(Sensor sensor, int accuracy) {
}
@Override
public void onSensorChanged(SensorEvent event) {
float x = event.values[0];
float y = event.values[1];
z.add((event.values[2])-SensorManager.GRAVITY_EARTH);
mLastAccelWithGrav = mAccelWithGrav;
mAccelWithGrav = android.util.FloatMath.sqrt(x * x + y * y + z.indexOf(z.size()-1) * z.indexOf(z.size()-1));
float delta = mAccelWithGrav - mLastAccelWithGrav;
mAccelNoGrav = mAccelNoGrav * 0.9f + delta; // Low-cut filter
if (mAccelNoGrav > 3) {
shakeIsHappening = true;
z.clear();
}
if (mAccelNoGrav < 0) {
if (shakeIsHappening) {
shakeIsHappening = false;
shakeHasHappened = true;
}
}
if (shakeIsHappening && z.size() != 0) {
if (z.get(z.size()-1) > z.get(0)) {
zup = true;
} else if (z.get(0) > z.get(z.size()-1)) {
zup = false;
}
}
if (shakeHasHappened) {
Log.d("click", "up is" + zup + "Low Z:" + z.get(0) + " high Z:" + z.get(z.size()-1));
if (!zup) {
shakeHasHappened = false;
playSound(shakesound2, 1.0f);
z.clear();
} else if (zup) {
shakeHasHappened = false;
playSound(shakesound1, 1.0f);
z.clear();
}
}
}
}
Some of the problems I'm having are:
I think ShakeHasHappened kicks in when deceleration starts, when acceleration goes below zero. Perhaps this should be when deceleration stops, when acceleration has gone negative and is now moving back towards zero. Does that sound sensible?
The way of detecting whether the motion is up or down isn't working. Is this because the z-axis data includes the acceleration itself, so it doesn't give me an accurate reading of the phone's position?
I'm getting lots of double clicks, and I can't quite work out why this is. Sometimes it doesn't click at all.
If anyone wants to have a play around with this code and see if they can find a way of making it more accurate and more efficient, please go ahead and share your findings. And if anyone can spot why it's not working the way I want it to, again please share your thoughts.
To link sounds to this code, drop your wav files into your res/raw folder and reference them in the R.raw.shake1 bit (no extension).
Thanks
EDIT: I've done a bit of research and have stumbled across something called Dynamic Time Warping. I don't know anything about this yet, but will start to look in to it. Does anyone know if DTW could be a different method of achieving a working maraca simulator app?
I can give you some pointers on this:
First of all, I noticed that you're using the same resource for both outcomes:
soundsMap.put(shakesound1, soundpool.load(this, R.raw.shake1, 1));
soundsMap.put(shakesound2, soundpool.load(this, R.raw.shake1, 1));
The resource in case of shakesound2 should be R.raw.shake2.
Second, the following only deals with one of the motions:
if (mAccelNoGrav > 3)
This should be changed to:
if (mAccelNoGrav > 3 || mAccelNoGrav < -3)
Currently, you are not intercepting downward motion.
Third, acceleration value of 3 is rather low. If you want to avoid/filter-out normal arm movement, this value should be around 6 or 7 and -6 or -7.
Fourth, you do not need to store z values to check whether the motion was up or down. You can simply check:
mAccelNoGrav > 6 ===> motion was upwards
mAccelNoGrav < -6 ===> motion was downwards
You can use this information to set zup accordingly.
Fifth: I can only guess that you are using if (mAccelNoGrav < 0) to play the sound when the motion ends. In that case, the check should be changed to:
if (mAccelNoGrav > -epsilon && mAccelNoGrav < epsilon)
where epsilon defines a small band around zero, for example (-1, 1).
Sixth, you should include a lockout period in your application. This would be the period after all conditions have been met and a sound is about to be played. For the next, say 1000 ms, don't process the sensor values. Let the motion stabilize. You'll need this to avoid getting multiple playbacks.
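A minimal lockout could wrap your existing playSound() like this (a sketch; the field names are illustrative):
private static final long LOCKOUT_MS = 1000; // the period suggested above
private long lastPlayedAt = 0;

private void playSoundWithLockout(int sound, float speed) {
    long now = System.currentTimeMillis();
    if (now - lastPlayedAt < LOCKOUT_MS) {
        return; // still inside the lockout window; ignore this trigger
    }
    lastPlayedAt = now;
    playSound(sound, speed); // your existing playSound()
}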
Note: Please include comments in your code. At the very least, place comments on every block of code to convey what you are trying to accomplish with it.
I tried to implement this myself a while ago and ended up using this solution:
http://jarofgreen.co.uk/2013/02/android-shake-detection-library/
It is based on the same concept as in your question.
The below code is my attempt to send mMyView to the front or the back of the set of children of mPivotParent so it will be rendered on top or behind the others. Hiding the view will not suffice in my case.
mPivotParent is a FrameLayout.
Looking at mPivotParent.mChildren shows that my code below "works" in that the ordering is being set correctly, yet it has no impact on the z-order. Not only that, but the framerate gets cumulatively slower the more times the repositioning code is called. There are 4 children in total, and mPivotParent.mChildrenCount remains at 4 throughout, as expected.
I'm targeting API Level 7.
@Override
public boolean onTouchEvent(MotionEvent event) {
Display display = getWindowManager().getDefaultDisplay();
float x = event.getRawX();
float sWidth = (int)display.getWidth();
float xLerpFromCenter = ((x / sWidth) - .5f) * 2.f; // [-1, 1]
mRotateAnimation.mfLerp = xLerpFromCenter;
if(xLerpFromCenter < -0.2f && mPivotParent.getChildAt(0) != mMyView)
{
mPivotParent.removeView(mMyView);
mPivotParent.addView(mMyView, 0);
refreshEverything();
}
else if(xLerpFromCenter > 0.2f && mPivotParent.getChildAt(0) == mMyView)
{
mPivotParent.removeView(mMyView);
mPivotParent.addView(mMyView, mPivotParent.getChildCount() - 1);
refreshEverything();
}
return super.onTouchEvent(event);
}
private void refreshEverything()
{
for(int i = 0; i < mPivotParent.getChildCount(); ++i)
{
mPivotParent.getChildAt(i).invalidate();
mPivotParent.getChildAt(i).requestLayout();
}
mPivotParent.invalidate();
mPivotParent.requestLayout();
}
Partial Solution
Here's a somewhat inefficient hack but it works for my purpose, which is to take the top item and send it to the back, keeping all other items in their same order.
private void putFrontAtBack()
{
for(int i = 0; i < mPivotParent.getChildCount() - 1; ++i)
{
mPivotParent.getChildAt(0).bringToFront();
}
refreshEverything();
}
Note: This doesn't work in the general case of arbitrary re-ordering.
Try this.
private void reorder(int[] order)
{
if(order == null || order.length != mPivotParent.getChildCount()) return;
for(int i = order.length - 1; i >= 0; i--)
{
mPivotParent.getChildAt(order[i]).bringToFront();
}
refreshEverything();
}
This code provides arbitrary reordering. The integer array "order" is used to indicate the new order of each view, where order[n]=x means the new order of childAt(x) is n.
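For reference, an alternative way to apply a fully arbitrary order (an untested sketch; the helper name is illustrative) is to detach the children and re-add them in the desired sequence:
// newOrder[0] ends up at index 0 (drawn first, i.e. at the back);
// the last element is drawn last, i.e. on top
private void applyChildOrder(ViewGroup parent, View[] newOrder) {
    parent.removeAllViews();
    for (View child : newOrder) {
        parent.addView(child);
    }
    parent.requestLayout();
    parent.invalidate();
}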