I want to create an Android app that keeps track of the user's steps; however, it is crucial to the app that the user cannot inflate the step count simply by shaking the phone. Is there any method to make sure that the person is actually moving and not just shaking their phone?
Related
I am developing a game on Android, and its requirements are:
in the normal state, the app shows a waiting screen
when a person is looking at the device, the game starts
when the person walks away, the game closes automatically and returns to the waiting screen
After researching, I found a method: use the Vision API face-detection service to start the game when the user looks at the device and stop it when the user leaves. I was able to do it, but the problem is that this solution makes the game very slow, I think because face detection is always running.
My question is: is there any other solution with better performance for detecting whether a person is viewing/playing on the device, one that doesn't affect the main program?
Thank you.
If it is necessary for you to detect a face, then I'm afraid the Vision API is your only option. Otherwise, if you only need to detect whether someone is in front of the phone, look into using the proximity sensor on Android. I'm not sure it is the most effective way, but it is the best candidate solution I can think of.
Here's a reference on the usage of the proximity sensor
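For reference, a minimal sketch of listening to the proximity sensor with the standard SensorManager API (the startGame()/showWaitingScreen() hooks are hypothetical placeholders for your own code):

```java
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

public class ProximityActivity extends Activity implements SensorEventListener {

    private SensorManager sensorManager;
    private Sensor proximity;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        proximity = sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
    }

    @Override
    protected void onResume() {
        super.onResume();
        // Register for proximity updates while the activity is visible.
        sensorManager.registerListener(this, proximity, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    protected void onPause() {
        super.onPause();
        // Always unregister to avoid wasting battery.
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Many devices report a binary near/far value; "near" is anything
        // below the sensor's maximum range.
        boolean somethingNear = event.values[0] < proximity.getMaximumRange();
        if (somethingNear) {
            // startGame();        // hypothetical hook: someone is in front of the device
        } else {
            // showWaitingScreen(); // hypothetical hook: nobody detected
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```

Most devices report proximity as a simple near/far reading, so the cost is negligible compared with running face detection on every camera frame.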
I've developed a fitness app which uses the Google Fit API to capture the user's steps, calories, and exercise minutes.
After rolling out the app, I've received a lot of feedback that the step count fluctuates (e.g. one moment it increases to 2300, the next moment it drops to 1900). When users cross-check against the Google Fit app, they see the same thing.
I am using the Recording API to get the latest steps. I was only able to simulate this fluctuation by shaking the phone continuously (instead of walking) in an attempt to increase the steps.
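For context, the subscription and read path looks roughly like this (a simplified sketch rather than my exact code; it assumes Google Sign-In with the fitness scope has already completed):

```java
import android.app.Activity;
import com.google.android.gms.auth.api.signin.GoogleSignIn;
import com.google.android.gms.auth.api.signin.GoogleSignInAccount;
import com.google.android.gms.fitness.Fitness;
import com.google.android.gms.fitness.data.DataSet;
import com.google.android.gms.fitness.data.DataType;
import com.google.android.gms.fitness.data.Field;

public class StepReporter {

    // Subscribe so Google Fit records step data in the background.
    public static void subscribeToSteps(Activity activity) {
        GoogleSignInAccount account = GoogleSignIn.getLastSignedInAccount(activity);
        Fitness.getRecordingClient(activity, account)
                .subscribe(DataType.TYPE_STEP_COUNT_DELTA)
                .addOnSuccessListener(unused ->
                        android.util.Log.i("StepReporter", "Subscribed to step recording"));
    }

    // Read the step total recorded so far today.
    public static void readDailySteps(Activity activity) {
        GoogleSignInAccount account = GoogleSignIn.getLastSignedInAccount(activity);
        Fitness.getHistoryClient(activity, account)
                .readDailyTotal(DataType.TYPE_STEP_COUNT_DELTA)
                .addOnSuccessListener((DataSet dataSet) -> {
                    int total = dataSet.isEmpty()
                            ? 0
                            : dataSet.getDataPoints().get(0).getValue(Field.FIELD_STEPS).asInt();
                    android.util.Log.i("StepReporter", "Steps today: " + total);
                });
    }
}
```

The Recording API only subscribes to background recording; the totals themselves are read back through the History client.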
What I noticed was that the steps increased, but within moments they dropped back to the count from before I started shaking the phone. I was wondering whether Google Fit somehow detected that I was "cheating" and reverted the steps. Right after that, when I walked normally, the steps were still reverted; it takes a while before Fit begins to record my steps again.
However, my users claim that they did not shake the phone but walked normally, yet they still face the issue.
I would really appreciate it if anyone could enlighten me, as I couldn't find any post on this. TIA!
I've been exploring 3D scanning and reconstruction using Google's Project Tango.
So far, some apps I've tried like Project Tango Constructor and Voxxlr do a nice job over short time-spans (I would be happy to get recommendations for other potential scanning apps). The problem is, regardless of the app, if I run it long enough the scans accumulate so much drift that eventually everything is misaligned and ruined.
High chance of drift also occurs whenever I point the device over a featureless space like a blank wall, or when I point the cameras upward to scan ceilings. The device gets disoriented temporarily, thereby destroying the alignment of future scans. Whatever the case, getting the device to know where it is and what it is pointing at is a problem for me.
I know that some of the 3D scanning apps use Area Learning to some extent, since these apps ask me for permission to allow area learning upon startup of the app. I presume that this is to help localize the device and stabilize its pose (please correct me if this is inaccurate).
From the apps I've tried, I have never been given an option to load my own ADF. My understanding is that loading in a carefully learned feature-rich ADF helps to better anchor the device pose. Is there a reason for this dearth of apps that allow users to load in their homemade ADFs? Is it hard/impossible to do? Are current apps already optimally leveraging on area learning to localize, and is it the case that no self-recorded ADF I provide could ever do better?
I would appreciate any pointers/instruction on this topic - the method and efficacy of using ADFs in 3D scanning and reconstruction is not clearly documented. Ultimately, I'm looking for a way to use the Tango to make high quality 3D scans. If ADFs are not needed in the picture, that's fine. If the answer is that I'm endeavoring on an impossible task, I'd like to know as well.
If off-the-shelf solutions are not yet available, I am also willing to try to process the point cloud myself, though I have a feeling it's probably much easier said than done.
Unfortunately, Tango doesn't have any application that can do this at the moment; you will need to develop your own application for this. In case you are wondering how to do it in code, here are the steps:
First, the application's learning mode should be on. When we turn learning mode on, the system starts to record an ADF, which allows the application to recognize an existing area it has been to before. For each point cloud we save, we should also save the timestamp associated with those points.
After walking around and collecting the points, we need to call the TangoService_saveAreaDescription function from the API. This step performs an optimization over each key pose saved in the system. After saving is done, we use the timestamp stored with each point cloud to query the optimized pose again; to do that, we use the TangoService_getPoseAtTime function. After this step, you will see each point cloud placed at the correct transformation, and the points will overlap properly.
Just as a recap of the steps:
Turn on learning mode in Tango config.
Walk around, save point cloud along with the timestamp associated with the point cloud.
Call the TangoService_saveAreaDescription function.
After saving is done, call TangoService_getPoseAtTime to query the optimized pose based on the timestamp saved with the point cloud.
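If you are working with the Tango Java client instead of the C API, the same flow maps onto Tango.saveAreaDescription and Tango.getPoseAtTime; here is a rough sketch under that assumption (not a drop-in implementation):

```java
import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.TangoConfig;
import com.google.atap.tangoservice.TangoCoordinateFramePair;
import com.google.atap.tangoservice.TangoPoseData;

public class AdfScanHelper {

    // Step 1: turn on learning mode in the Tango config before connecting.
    public static TangoConfig buildLearningConfig(Tango tango) {
        TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
        config.putBoolean(TangoConfig.KEY_BOOLEAN_LEARNINGMODE, true);
        return config;
    }

    // Step 3: after walking around and saving point clouds with their timestamps,
    // save the ADF; this is where the pose-graph optimization happens.
    public static String saveAdf(Tango tango) {
        return tango.saveAreaDescription();  // returns the ADF's UUID
    }

    // Step 4: re-query the optimized pose for each saved point-cloud timestamp.
    public static TangoPoseData optimizedPoseAt(Tango tango, double pointCloudTimestamp) {
        TangoCoordinateFramePair framePair = new TangoCoordinateFramePair(
                TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
                TangoPoseData.COORDINATE_FRAME_DEVICE);
        return tango.getPoseAtTime(pointCloudTimestamp, framePair);
    }
}
```

Step 2 (saving each point cloud alongside its timestamp) is up to your own data structures; the key point is that the poses you query in step 4 only become consistent after the save in step 3 has run its optimization.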
I am a beginner in Android. I am trying to implement a fitness app that can keep track of running speed and running distance in Android. How can I calculate these?
In theory you could analyse windows of accelerometer data and count the number and strength of the peaks to determine running. Then, if the user has entered an average step length, distance is simply the step count multiplied by that step length (see the sketch below).
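A very rough sketch of that idea, using naive threshold-based peak counting on the accelerometer magnitude (the threshold and step length are made-up values you would have to tune or collect from the user):

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class StepEstimator implements SensorEventListener {

    private static final float PEAK_THRESHOLD = 12f;   // m/s^2, tune experimentally
    private static final float STEP_LENGTH_M = 0.75f;  // user-provided average step length

    private boolean abovePeak = false;
    private int stepCount = 0;

    public void start(SensorManager sensorManager) {
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_GAME);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float x = event.values[0], y = event.values[1], z = event.values[2];
        // Magnitude of acceleration including gravity (~9.8 m/s^2 at rest).
        float magnitude = (float) Math.sqrt(x * x + y * y + z * z);

        // Count a step each time the magnitude rises above the threshold
        // (very naive peak detection; real apps filter and window the signal).
        if (magnitude > PEAK_THRESHOLD && !abovePeak) {
            abovePeak = true;
            stepCount++;
        } else if (magnitude < PEAK_THRESHOLD) {
            abovePeak = false;
        }
    }

    // distance = steps * average step length
    public float distanceMeters() {
        return stepCount * STEP_LENGTH_M;
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```

Real pedometer implementations also low-pass filter the signal and look at the spacing between peaks, otherwise shaking the phone produces much the same peaks as walking.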
It would be a lot easier using GPS, as it provides the speed directly.
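For the GPS route, a minimal sketch using the framework LocationManager (it assumes the location permission has already been granted; Location.getSpeed() is in metres per second):

```java
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

public class GpsSpeedTracker implements LocationListener {

    private Location lastLocation;
    private float totalDistanceMeters = 0f;

    // Caller must hold ACCESS_FINE_LOCATION before calling this.
    public void start(LocationManager locationManager) {
        // Request updates every 2 seconds / 5 metres from the GPS provider.
        locationManager.requestLocationUpdates(
                LocationManager.GPS_PROVIDER, 2000, 5, this);
    }

    @Override
    public void onLocationChanged(Location location) {
        float speedMs = location.getSpeed();  // current speed in metres per second
        if (lastLocation != null) {
            // Accumulate distance between successive fixes.
            totalDistanceMeters += lastLocation.distanceTo(location);
        }
        lastLocation = location;
        android.util.Log.i("GpsSpeedTracker",
                "speed=" + speedMs + " m/s, distance=" + totalDistanceMeters + " m");
    }

    @Override public void onStatusChanged(String provider, int status, Bundle extras) { }
    @Override public void onProviderEnabled(String provider) { }
    @Override public void onProviderDisabled(String provider) { }
}
```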
You might be interested in this library: https://github.com/mcharmas/Android-ReactiveLocation I recently added Activity Recognition, which can tell you whenever a user starts running. It might take a little while from when one begins to run before the phone 'knows' that this is the current activity, though.
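If you prefer to use the underlying Play Services Activity Recognition API directly rather than the library, registering for updates and reacting to RUNNING looks roughly like this (a sketch; the confidence threshold and detection interval are arbitrary choices):

```java
import android.app.IntentService;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import com.google.android.gms.location.ActivityRecognition;
import com.google.android.gms.location.ActivityRecognitionResult;
import com.google.android.gms.location.DetectedActivity;

// IntentService that receives activity-recognition results from Play Services.
public class ActivityDetectionService extends IntentService {

    public ActivityDetectionService() {
        super("ActivityDetectionService");
    }

    // Ask Play Services to deliver activity updates to this service.
    public static void register(Context context) {
        Intent intent = new Intent(context, ActivityDetectionService.class);
        PendingIntent pendingIntent = PendingIntent.getService(
                context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
        ActivityRecognition.getClient(context)
                .requestActivityUpdates(10_000, pendingIntent);  // ~10 s detection interval
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        if (intent == null || !ActivityRecognitionResult.hasResult(intent)) {
            return;
        }
        ActivityRecognitionResult result = ActivityRecognitionResult.extractResult(intent);
        DetectedActivity mostProbable = result.getMostProbableActivity();
        if (mostProbable.getType() == DetectedActivity.RUNNING
                && mostProbable.getConfidence() > 75) {
            // User appears to be running; start measuring speed/distance here.
        }
    }
}
```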
Is there any way to make this kind of app:
you draw a room precisely in the program, mark your actual position (where your phone is right now) as a point in this representation of your room, and then, as your phone moves, the app says precisely where it is in the room?
You are going to have issues with most phones, because GPS isn't going to give you a fine enough location for what you're trying to do.
http://www.mio.com/technology-gps-accuracy.htm
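You can see this for yourself by logging the error radius that comes with each fix; indoors it is typically several metres or worse, which is coarser than most rooms (a sketch, assuming you already receive Location objects from a location provider):

```java
import android.location.Location;

public class AccuracyLogger {

    // Location.getAccuracy() is the estimated horizontal error radius in metres:
    // a value of, say, 8.0 means the true position is somewhere within ~8 m,
    // which is larger than many rooms.
    public static void logFix(Location location) {
        android.util.Log.i("AccuracyLogger",
                "lat=" + location.getLatitude()
                        + " lon=" + location.getLongitude()
                        + " accuracy=" + location.getAccuracy() + " m");
    }
}
```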