I'm developing an app that is essentially Paint-style. The user touches
the screen and can draw images. However I would like to measure the
speed of the user's movements. At the moment I take the X and Y
coordinates from event.getX/Y and calculate the distance from the
previous values. This is directly proportional to
the speed of the movement provided that the timing intervals of the
onTouchListener are constant. Is this the case for Android? I know for
iPhone the listener actually changes its frequency depending on the
input.
Furthermore, if it is a constant value, does anyone know what it is, so I can express the speed in useful units (mm/sec) rather than an arbitrary value?
I don't think it's the same for Android (but I have to say I'm not 100% sure). But since this could potentially differ even between manufacturers, could you not look at System.currentTimeMillis() on the two events, to calculate the speed yourself?
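A minimal sketch of that idea (mine, untested): MotionEvent already carries its own timestamp via getEventTime(), so you don't even need System.currentTimeMillis():

import android.view.MotionEvent;
import android.view.View;

View.OnTouchListener speedListener = new View.OnTouchListener() {
    private float lastX, lastY;
    private long lastT;

    @Override
    public boolean onTouch(View v, MotionEvent e) {
        if (e.getAction() == MotionEvent.ACTION_MOVE && lastT != 0) {
            float dx = e.getX() - lastX;
            float dy = e.getY() - lastY;
            long dtMs = e.getEventTime() - lastT;  // ms between this event and the last
            if (dtMs > 0) {
                // speed in pixels/second; convert to mm/s with DisplayMetrics.xdpi
                double pxPerSec = Math.sqrt(dx * dx + dy * dy) * 1000.0 / dtMs;
            }
        }
        lastX = e.getX();
        lastY = e.getY();
        lastT = e.getEventTime();
        return true;
    }
};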
Sure, this is a decade-old question, but I have a specific answer. Following TutorialPoint's Touch Event tutorial, and modifying the textView size to fill the screen, I tested the callback frequency by logging the times it was called before release. I first held my finger in one position for exactly 10 seconds, which resulted in 19 and 9 onTouchListener()->onTouch() event calls in two trials. Moving my finger as fast as I could back and forth resulted in counts within a few of 600 for a 10 second period, in two attempts.
This equates to a maximum of 60 Hz on my S8+.
I logged my coordinates, exported them into Excel and made a scatter plot. Faster movement correlated with the points becoming more sparse. In my application I want to have a user draw a wave to play as a tone, for experimenting with different waveforms and how they sound. For good sound quality I need at least 20 kHz, with a target of 40.1 kHz.
I'm trying to capture images with 30 seconds exposure times in my app (I know it's possible since the stock camera allows it).
But SENSOR_INFO_EXPOSURE_TIME_RANGE (which is supposed to be in nanoseconds) gives me the range:
13272 - 869661901
In seconds that would be just
0.000013272 - 0.869661901
which is obviously less than a second.
How can I use longer exposure times?
Thanks in advance!
The answer to your question:
You can't. You checked exactly the right information and interpreted it correctly. Any value you set for the exposure time longer than that will be clipped to that max amount.
The answer you want:
You can still get what you want, though, by faking it. You want 30 continuous seconds' worth of photons falling on the sensor, which you can't get. But you can get something (virtually) indistinguishable from it by accumulating 30 seconds' worth of photons with tiny missing intervals interspersed.
At a high level, what you need to do is create a List of CaptureRequests and pass it to CameraCaptureSession.captureBurst(...). This will take the shots with as little interstitial time as possible. When each frame of image data is available, pass it to a new buffer somewhere and accumulate the information (simple point-wise addition). This is probably most properly done with an Allocation as the output Surface and some RenderScript.
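Here is a rough sketch of that burst setup (mine, not tested; cameraDevice, outputSurface, session, and backgroundHandler are assumed to come from the usual Camera2 plumbing):

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.os.Handler;
import android.view.Surface;
import java.util.ArrayList;
import java.util.List;

void captureFakeLongExposure(CameraDevice cameraDevice, Surface outputSurface,
                             CameraCaptureSession session, Handler backgroundHandler)
        throws CameraAccessException {
    long maxExposureNs = 869_661_901L;    // top of SENSOR_INFO_EXPOSURE_TIME_RANGE
    long totalNs = 30L * 1_000_000_000L;  // 30 seconds' worth of light
    int frames = (int) Math.ceil((double) totalNs / maxExposureNs);  // ~35 shots

    List<CaptureRequest> burst = new ArrayList<>(frames);
    for (int i = 0; i < frames; i++) {
        CaptureRequest.Builder b =
                cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_MANUAL);
        b.addTarget(outputSurface);  // e.g. an ImageReader's surface
        b.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF);
        b.set(CaptureRequest.SENSOR_EXPOSURE_TIME, maxExposureNs);
        burst.add(b.build());
    }
    session.captureBurst(burst, null, backgroundHandler);  // shots fire back-to-back
}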
Notes on data format:
The right way to do this is to use the RAW_SENSOR output format if you can. That way the accumulated output really is directly proportional to the light that was incident to the sensor over the whole 30s.
If you can't use that, for some reason, I would recommend using YUV_420_888 output, and make sure you set the tone map curve to be linear (unfortunately you have to do this manually by creating a curve with two points). Otherwise the non-linearity introduced will ruin our scheme. (Although I'm not sure simple addition is exactly right in a linear YUV space, but it's a first approach at least.) Whether you use this approach or RAW_SENSOR, you'll probably want to apply your own gamma curve/tone map after accumulation to make it "look right."
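Setting that linear curve looks something like this (a sketch; builder stands in for the CaptureRequest.Builder you are already using):

import android.hardware.camera2.CaptureRequest;
import android.hardware.camera2.params.TonemapCurve;

// Two control points per channel, as (Pin, Pout) pairs: a straight 0-to-1 line.
float[] linear = {0f, 0f, 1f, 1f};
builder.set(CaptureRequest.TONEMAP_MODE, CaptureRequest.TONEMAP_MODE_CONTRAST_CURVE);
builder.set(CaptureRequest.TONEMAP_CURVE, new TonemapCurve(linear, linear, linear));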
For the love of Pete, don't use JPEG output, for many reasons, not the least of which is that it will most likely add a LOT of interstitial time between exposures, thereby ruining our approximation of 30 s of continuous exposure.
Note on exposure equivalence:
This will produce almost exactly the exposure you want, but not quite. It differs in two ways.
There will be small missing periods of photon information in the middle of this chunk of exposure time. But on the time scale you are talking about (30s), missing a few milliseconds of light here and there is trivial.
The image will be slightly noisier than if you had taken a true single exposure of 30 s. This is because each time you read out the pixel values from the actual sensor, a little electronic noise gets added to the information. So in the end you'll have 35 times as much of this additive noise (from the 35 exposures for your specific problem) as a single exposure would. There's no way around this, sorry, but it might not even be noticeable; this is usually fairly small relative to the meaningful photographic signal. It depends on the camera sensor quality (and ISO, though I imagine for this application you need that to be high).
(Bonus!) This exposure will actually be superior in one way: Areas that might have been saturated (pure white) in a 30s exposure will still retain definition in these far shorter exposures, so you're basically guaranteed not to lose your high end details. :-)
You can't always trust SENSOR_INFO_EXPOSURE_TIME_RANGE, as of May 2017. Try manually increasing the time and see what happens. I know my Pixel will actually take a 1.9 sec shot even though SENSOR_INFO_EXPOSURE_TIME_RANGE reports a sub-second maximum.
I am working on a bike computer app. I was hoping to work out the inclination of the slope using the accelerometer, but things are not working too well.
I have put in test code to get the sensor data. I am just sampling at the UI rate and keeping a moving average over 128 samples, which is about 6 seconds' worth. With the phone in hand the data is good and I can calculate a good angle compared to my calibration flat vector.
With the phone mounted on the bike, things are not at all good. I expected a good bit of noise, but I was hoping that the large number of samples over the big time window would remove the vibration effects and general bike movements. Unfortunately this just is not working: the magnitude of the acceleration vector is not staying around the 9.8 mark but is dropping lower, which indicates to me that something is not right somewhere.
Here is a plot of the data from part of a test ride.
As you can see, when stationary at the start the magnitude is OK, but once I get going it drops. I am fairly sure the problem is vibration related: I initially descend, and there is heavy vibration; I then climb, the vibration is less, and the magnitude gets back towards 9.8; but then I drop down quickly on a bad road and the magnitude ends up less than 3.
This is with a Sony Ericsson Xperia Active, which uses a BMA250 sensor; the datasheet suggests the sensor should be capable. My only theory for the cause of the problem is that the range is set to the 2g range and the vibration is causing data to go out of range, and this is causing my problems.
Has anyone seen anything like this?
Has anyone got any ideas on the cause of the problem?
Is there any way to change the sensitivity that I have not found?
Additional information.
OK, I logged the raw sensor data before my filtering. A very small portion is presented here.
The major axis is in green; on the flat, without the vibration, I believe this should be about 8.5. There is no obvious clamping of the data, but I get more values below 8.5 than above it. Even if the sensor is set to its most sensitive 2g range, it looks like the vibration should be OK: I have a maximum value here of just over 15 and a minimum of -10, well within a +-20 range, just not centered correctly on the 8.5 it should be.
I will dig out my other phone, which looks to have a slightly different sensor (a BMA150), and try with that, but unless it is perfect I think I will have to give up on the idea.
I suspect the accelerometer is not linear over such large G ranges. If so, and if there is any asymmetry, it will do what you see.
The solution for that is to pad the accelerometer mount a bit more (foam rubber, bungee cord, whatever), and possibly mount it on a heavier stage to filter the vibration more.
Or (not a good solution) try to model the error and compensate for it.
I used the same handset and by coincidence the same averaging interval of 6 seconds for an application a few years ago and I don't recall seeing the behaviour in the graph.
I'm wondering whether the issue is in the way the 6 second averages are being accumulated. One problem I had is that the sampling interval was not constant but depended on how busy the processor was. A sample is acquired in the specified time, but the calling of the event handler depends on the scheduler. When the processor is unloaded, sampling occurs at a constant frequency, but as the processor works harder the sampling frequency becomes slower and more erratic.

You can write your app to keep processor load low while sampling. What we did was sample for 6 seconds, doing nothing else, then stop sampling and process the last sample set, but this was only partially successful, as you can't control other apps running at the same time and the scheduler shares processor resources across them all. On the Xperia Active I found it can occasionally go out to seconds between samples, which I attributed to garbage collection in one of the JVMs.

The solution for us was to timestamp each sample, then run some quality checks over a sample set and discard those that failed. This is a poor solution, as defining what is good enough is imprecise, and when the user runs another app that uses a lot of resources most sample sets can be discarded, so the app needs additional logic to handle that.
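A sketch of that timestamp-and-discard approach (my reconstruction, not our original code; the 20 ms gap threshold and processWindow() are illustrative):

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import java.util.ArrayList;
import java.util.List;

class QualityCheckedSampler implements SensorEventListener {
    private final List<Long> stamps = new ArrayList<>();
    private final List<float[]> window = new ArrayList<>();

    @Override
    public void onSensorChanged(SensorEvent event) {
        stamps.add(event.timestamp);      // nanoseconds, recorded per sample
        window.add(event.values.clone());
        if (event.timestamp - stamps.get(0) >= 6_000_000_000L) {  // 6 s window
            if (gapsAreAcceptable(stamps)) {
                processWindow(window);    // average, compute the angle, etc.
            }                             // otherwise discard the whole set
            stamps.clear();
            window.clear();
        }
    }

    private boolean gapsAreAcceptable(List<Long> ts) {
        for (int i = 1; i < ts.size(); i++) {
            if (ts.get(i) - ts.get(i - 1) > 20_000_000L) return false;  // >20 ms gap
        }
        return true;
    }

    private void processWindow(List<float[]> samples) { /* app-specific */ }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}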
The current Android API, unavailable on the Xperia Active, should have eliminated this as samples can be batched as described at https://source.android.com/devices/sensors/hal-interface.html#batch_sensor_flags_sampling_period_maximum_report_latency .
If the algorithm assumed a particular number of samples rather than counting them, and the processor worked harder as the bike went faster (though I'm not sure why it would), it would produce something like the first graph, because when the bike is going downhill the magnitude goes down and when going uphill it goes up. There is a lot of speculation there, but a 6 second average giving a magnitude of less than 3 m/s^2 looks implausible from my experience with this sensor.
I'm developing an app for Android (API level 11), that reads some data from a device, one chunk a second. I want to plot the data, x-axis being time in seconds and y-axis being the values. One of the requirements for the program is that I must show the results of one hour of observations on one screen without any scrolling, so I must fit 3600 points on the screen. It doesn't matter that the points overlap.
I don't know how to plot the data fast enough.
I'm using androidplot, and things get really slow after approximately 30 minutes, as androidplot redraws the same points over and over again.
I tried to make some changes to it (as the sources can be freely downloaded), but in vain.
Can I somehow cache the previously drawn image and add only new points to it?
Is there something like WinAPI BitBlt() function in Android?
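For what it's worth, the closest analogue to BitBlt() I know of on Android is Canvas.drawBitmap(): keep an offscreen Bitmap, append only the new points to it, and blit the whole thing in onDraw(). A minimal sketch, assuming a custom View (all names here are illustrative):

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.view.View;

public class CachedPlotView extends View {
    private Bitmap cache;        // offscreen buffer holding everything drawn so far
    private Canvas cacheCanvas;  // draws into the cache
    private final Paint paint = new Paint();

    public CachedPlotView(Context context) { super(context); }

    public void addPoint(float x, float y) {
        if (cache == null) {     // lazily create the buffer at the view's size
            cache = Bitmap.createBitmap(getWidth(), getHeight(), Bitmap.Config.ARGB_8888);
            cacheCanvas = new Canvas(cache);
        }
        cacheCanvas.drawPoint(x, y, paint);  // only the new point is rendered
        invalidate();
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (cache != null) canvas.drawBitmap(cache, 0, 0, null);  // the "BitBlt"
    }
}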
I am working on an application where I would like to track the position of a mobile user inside a building where GPS is unavailable. The user starts at a well-known fixed location (accurate to within 5 centimeters), at which point the accelerometer in the phone is activated to track any further movements with respect to that fixed location. My question is: with current generation smartphones (iPhones, Android phones, etc.), how accurately can one expect to be able to track somebody's position based on the accelerometer these phones generally come equipped with?
Specific examples would be good, such as "If I move 50 meters X from the starting point, 35 meters Y from the starting point and 5 meters Z from the starting point, I can expect my location to be approximated to within +/- 80 centimeters on most current smart phones", or whatever.
I have only a superficial understanding of techniques like Kalman filters to correct for drift, though if such techniques are relevant to my application and someone wants to describe the quality of the corrections I might get from such techniques, that would be a plus.
If you integrate the accelerometer values twice you get position but the error is horrible. It is useless in practice.
Here is an explanation why (Google Tech Talk) at 23:20.
I answered a similar question.
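To put a rough number on "horrible" (a back-of-the-envelope sketch of mine, not from the talk): a constant bias b in the measured acceleration double-integrates into a position error e(t) = 0.5 * b * t^2, so even a modest bias of 0.1 m/s^2 grows to 0.5 * 0.1 * 60^2 = 180 m of drift after one minute.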
I don't know if this thread is still open or even if you are still attempting this approach, but I can at least give some input on this, considering I tried the same thing.
As Ali said... it's horrible! The smallest measurement error in accelerometers turns out to be ridiculous after double integration. And due to the constant increase and decrease in acceleration while walking (with each footstep, in fact), this error quickly accumulates over time.
Sorry for the bad news. I also didn't want to believe it, until trying it myself... filtering out unwanted measurements also doesn't work.
I have another possibly plausible approach, if you're interested in proceeding with your project (the approach I followed for my computer engineering degree thesis)... through image processing!
You basically follow the theory behind optical mice: optical flow, or as some call it, ego-motion. The image processing algorithms are implemented in Android's NDK; I even implemented OpenCV through the NDK to simplify the algorithms. You convert images to grayscale (compensating for different light intensities), then apply thresholding and image enhancement (to compensate for images getting blurred while walking), then corner detection (to increase the accuracy of the final estimate), then template matching, which does the actual comparison between image frames and estimates the actual displacement in pixels.
You then go through trial and error to estimate how many pixels represent what distance, and multiply by that value to convert pixel displacement into actual displacement. This works only up to a certain movement speed, though; the real problem is that camera images still get too blurred for accurate comparison while walking. This can be improved by adjusting the camera shutter speed or ISO (I'm still playing around with this).
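A rough sketch of that pipeline (mine, not the exact thesis code) using OpenCV's Java bindings, with a central patch of the previous frame as the template:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

Point estimateShift(Mat prevFrame, Mat currFrame) {
    Mat prevGray = new Mat(), currGray = new Mat();
    Imgproc.cvtColor(prevFrame, prevGray, Imgproc.COLOR_RGBA2GRAY);  // grayscale
    Imgproc.cvtColor(currFrame, currGray, Imgproc.COLOR_RGBA2GRAY);

    // Take the central half of the previous frame as the template.
    Rect roi = new Rect(prevGray.cols() / 4, prevGray.rows() / 4,
                        prevGray.cols() / 2, prevGray.rows() / 2);
    Mat template = new Mat(prevGray, roi);

    // Find where that patch best matches in the current frame.
    Mat result = new Mat();
    Imgproc.matchTemplate(currGray, template, result, Imgproc.TM_CCOEFF_NORMED);
    Core.MinMaxLocResult mm = Core.minMaxLoc(result);

    // Displacement in pixels relative to the patch's old position.
    return new Point(mm.maxLoc.x - roi.x, mm.maxLoc.y - roi.y);
}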
So hope this helps... otherwise google for ego-motion for real-time applications. Eventually you'll get the right stuff and figure out the gibberish I just explained to you.
enjoy :)
The optical approach is good, but note that OpenCV also provides a few feature transforms, which you can then feature-match (OpenCV provides this too).
Without a second point of reference (2 cameras) you can't reconstruct where you are directly, because of depth. At best you can estimate a depth per point, assume a motion, score the assumption over a few frames, and re-guess each depth and motion until it makes sense. That isn't hard to code, but it isn't stable; small motions of things in the scene screw it up. I tried :)
With a second camera though, it's not that hard at all. But cell phones don't have them.
Typical phone accelerometer chips resolve +/- 2g at 12 bits, providing 1024 counts over the full range, or 0.0643 ft/sec^2 per lsb. The sampling rate depends on clock speeds and overall configuration. Typical rates are between one and 400 samples per second, with faster rates offering lower accuracy. Unless you mount the phone on a snail, displacement measurement likely will not work for you. You might consider using optical distance measurement instead of a phone accelerometer. Check out the Panasonic device EKMB1191111.
So, I've been struggling with this problem for some time, and haven't had any luck tapping the wisdom of the internets and related SO posts on the subject.
I am writing an Android app that uses the ubiquitous Accelerometer, but I seem to be getting an incredible amount of "noise" even while at rest, and can't seem to figure out how to deal with it as my readings need to be relatively accurate. I thought that maybe my phone (HTC Incredible) was dysfunctional, but the sensor seems to work well with other games and apps I've played.
I've tried to use various "filters" but I can't seem to wrap my mind around them. I understand that gravity must be dealt with in some way, and maybe that's where I am going wrong. Currently I have tried this, adapted from an SO answer, which refers to an example from the iPhone SDK:
// Low-pass filter: accel[] tracks the slowly-changing (gravity) component
accel[0] = event.values[0] * kFilteringFactor + accel[0] * (1.0f - kFilteringFactor);
accel[1] = event.values[1] * kFilteringFactor + accel[1] * (1.0f - kFilteringFactor);
// Subtracting the gravity estimate leaves the quick changes (high-pass)
double x = event.values[0] - accel[0];
double y = event.values[1] - accel[1];
The poster says to "play with" the kFilteringFactor value (kFilteringFactor = 0.1f in the example) until satisfied. Unfortunately I still seem to get a lot of noise, and all this seems to do is make the readings come in as tiny decimals, which doesn't help me all that much, and it appears to just make the sensor less sensitive. The math centers of my brain are also atrophied from years of neglect, so I don't completely understand how this filter is working.
Can someone explain to me in some detail how to go about getting a useful reading from the accelerometer? A succinct tutorial would be an incredible help, as I haven't found a really good one (at least aimed at my level of knowledge). I get frustrated because I feel like all of this should be more apparent to me. Any help or direction would be greatly appreciated, and of course I can provide more samples from my code if needed.
I hope I'm not asking to be spoon-fed too much; I wouldn't be asking unless I'd been trying to figure it out for a while. It also looks like there is some interest from other SO members.
To get a useful reading from the accelerometer you can use the magnitude of the acceleration vector: magnitude = SQRT(x*x + y*y + z*z). Using this, when the phone is at rest the magnitude will be that of gravity, 9.8 m/s^2. So if you subtract that (SensorManager.GRAVITY_EARTH), then when the phone is at rest you will have a reading of 0 m/s^2. As for noise, Blrfl might be right about cheap accelerometers; even when my phone is at rest, it continuously flickers by a few fractions of a m/s^2. You could just set a small threshold, e.g. 0.4 m/s^2, and if the magnitude doesn't go over that, then the phone is at rest.
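A minimal sketch of that (the 0.4 threshold is just the example value above):

import android.hardware.SensorEvent;
import android.hardware.SensorManager;

void onSensorChanged(SensorEvent event) {
    float x = event.values[0], y = event.values[1], z = event.values[2];
    double magnitude = Math.sqrt(x * x + y * y + z * z);   // ~9.8 m/s^2 at rest
    double net = magnitude - SensorManager.GRAVITY_EARTH;  // ~0 at rest
    boolean atRest = Math.abs(net) < 0.4;                  // noise threshold, m/s^2
}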
Partial answer:
Accuracy. If you're looking for high accuracy, the inexpensive accelerometers you find in handsets won't cut the mustard. For comparison, a three-axis sensor suitable for industrial or scientific use runs north of $1,500 for just the sensor; adding the hardware to power it and turn its readings into something a computer can use doubles the price. The sensor in a handset runs well below $5 in quantity.
Noise. Cheap sensors are inaccurate, and inaccuracy translates to noise. An inaccurate sensor that isn't moving won't always show zeros, it will show values on either side within some range. About the best you can do is characterize the sensor while motionless to get some idea how noisy it is and use that to round your measurements to a less-precise scale based on expected error. (In other words, If it's within ±x m/s^2 of zero, it's safe to say the sensor's not moving, but you can't be precisely sure because it could be moving very slowly.) You'll have to do this on every device, because they don't all use the same accelerometer and they all behave differently. I guess that's one advantage the iPhone has: the hardware's pretty much homogeneous.
Gravity. There's some discussion in the SensorEvent documentation about factoring gravity out of what the accelerometer says. You'll notice it bears a lot of similarity to the code you posted, except that it's clearer about what it's doing. :-)
HTH.
How do you deal with jitteriness? You smooth the data. Instead of taking the sequence of values from the sensor as your values, you average them on an ongoing basis, and the new sequence formed becomes the values you use. This moves each jittery value closer to the moving average. Averaging necessarily gets rid of quick variations in adjacent values, which is why people use the terminology low (frequency) pass filtering: data that originally may have varied a lot per sample (or unit time) now varies more slowly.
E.g., instead of using the values 10 6 7 11 7 10, you can average these in many ways. For example, we can compute the next value from an equal weight of the running average (i.e., of your last processed data point) and the next raw data point. Using a 50-50 mix for the above numbers, we'd get 10, 8, 7.5, 9.25, 8.125, 9.0625. This new sequence, our processed data, would be used in lieu of the noisy data. And we could of course use a different mix than 50-50.
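That 50-50 running average, as code (alpha is the mix weight):

float alpha = 0.5f;
float[] raw = {10, 6, 7, 11, 7, 10};
float smoothed = raw[0];
for (int i = 1; i < raw.length; i++) {
    smoothed = alpha * raw[i] + (1 - alpha) * smoothed;
    System.out.println(smoothed);  // prints 8.0, 7.5, 9.25, 8.125, 9.0625
}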
As an analogy, imagine you are reporting where a certain person is located using only your eyesight. You have a good view of the wider landscape, but the person is engulfed in a fog. You will see pieces of the body that catch your attention... a moving left hand, a right foot, shine off eyeglasses, etc., that are jittery, but each value is fairly close to the true center of mass. If we run some sort of running average, we'd get values that approach the center of mass of the target as it moves through the fog, and are in effect more accurate than the values we (the sensor) reported, which were made noisy by the fog.
Now it seems like we are losing potentially interesting data to get a boring curve. It makes sense though. If we are trying to recreate an accurate picture of the person in the fog, the first task is to get a good smooth approximation of the center of mass. To this we can then add data from a complementary sensor/measuring process. For example, a different person might be up close to this target. That person might provide very accurate description of the body movements, but might be in the thick of the fog and not know overall where the target is ending up. This is the complementary position to what we first got -- the second data gives detail accurately without a sense of the approximate location. The two pieces of data would be stitched together. We'd low pass the first set (like your problem presented here) to get a general location void of noise. We'd high pass the second set of data to get the detail without unwanted misleading contributions to the general position. We use high quality global data and high quality local data, each set optimized in complementary ways and kept from corrupting the other set (through the 2 filterings).
Specifically, we'd mix in gyroscope data -- data that is accurate in the local detail of the "trees" but gets lost in the forest (drifts) -- into the data discussed here (from accelerometer) which sees the forest well but not the trees.
To summarize, we low-pass data from sensors that is jittery but stays close to the "center of mass". We combine this base smooth value with data that is accurate in the detail but drifts, so this second set is high-pass filtered. We get the best of both worlds as we process each group of data to clean it of its incorrect aspects. For the accelerometer, we smooth/low-pass the data effectively by running some variation of a running average on its measured values. If we were treating the gyroscope data, we'd do math that effectively keeps the detail (accepts deltas) while rejecting the accumulated error that would eventually grow and corrupt the accelerometer's smooth curve. How? Essentially, we use the actual gyro values (not averages), but use only a small number of samples (of deltas) apiece when deriving our total final clean values. Using a small number of deltas keeps the overall curve mostly along the same averages tracked by the low-pass stage (by the averaged accelerometer data), which forms the bulk of each final data point.
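A hedged sketch of that combination (a classic complementary filter; the 0.98 weight, the angle representation, and all names here are my assumptions, not a prescribed implementation):

// angle is the clean, combined estimate (degrees).
float alpha = 0.98f;  // how much we trust each integrated gyro delta
float angle = 0f;

void update(float accelAngleDeg, float gyroRateDegPerSec, float dtSec) {
    angle = alpha * (angle + gyroRateDegPerSec * dtSec)  // gyro delta: the detail
          + (1 - alpha) * accelAngleDeg;                 // accel: anchors the average
}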