Is this Fourier Analysis of Luminance Signals Correct? (Android)

I'm writing an Android app that measures the luminance of camera frames over a period of time and calculates a heart rate using Fourier analysis to find the waveform's dominant frequency. The problem is that my spectral analysis looks like this:
which is pretty much the inverse of what a spectral analysis should look like (like a normal distribution). Can I accurately assess this to find the index of the maximum magnitude, or does this spectrum reveal that my data is too noisy?
EDIT:
Here's what my camera data looks like (I'm performing FFT on this):

It looks like you have two problems going on here:
1) The FFT output often places the values for negative frequencies to the right of the positive frequencies, which seems to be the case here. Therefore, you need to move the right half of the FFT to the left and put freq = 0 in the middle.
2) In the comments you say that you're plotting the magnitude, but that's clearly not the case (the magnitude should be non-negative and symmetric). Instead you're probably just plotting the real part. Take the magnitude instead, sqrt(Re*Re + Im*Im), where Re and Im are the real and imaginary parts respectively. (Depending on the form of your numbers, something like Math.sqrt(Math.pow(a.re, 2) + Math.pow(a.im, 2)).)
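Here is a minimal sketch of both steps in plain Java (not the poster's actual code). The arrays re and im are assumptions for however your FFT routine returns its real and imaginary outputs.

public final class SpectrumUtils {

    // Magnitude of each FFT bin: sqrt(Re^2 + Im^2).
    public static double[] magnitude(double[] re, double[] im) {
        double[] mag = new double[re.length];
        for (int i = 0; i < re.length; i++) {
            mag[i] = Math.sqrt(re[i] * re[i] + im[i] * im[i]);
        }
        return mag;
    }

    // "fftshift": rotate the spectrum so that the freq = 0 bin sits in the middle.
    public static double[] fftShift(double[] spectrum) {
        int n = spectrum.length;
        double[] shifted = new double[n];
        int half = n / 2;
        for (int i = 0; i < n; i++) {
            shifted[i] = spectrum[(i + half) % n];
        }
        return shifted;
    }
}

To read off a heart rate, find the bin with the largest magnitude in the positive-frequency half (ignoring the bin at 0 Hz); its index k corresponds to a frequency of k * frameRate / N Hz, and multiplying that by 60 gives beats per minute (frameRate being your camera sampling rate, N the number of samples).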

Related

Android: Achieving smooth roll of ball using accelerometer

I have developed a maze game for Android where you control the ball by tilting the phone.
So I use the accelerometer and integrate the x and y accelerometer values and then move the ball a step in that direction.
I have a problem though: I cannot achieve a very smooth roll. When the ball picks up speed, it is too obvious that it jumps in big discrete steps. I have seen other apps like this where the ball rolls fast but smoothly.
So I might have to change my strategy and use some sort of time-based solution instead. Right now, the faster the speed, the bigger the step I move. Maybe I should instead have a timer that moves the ball 1 pixel every ms if the speed is high, or only every 10th ms if the speed is low, or something along those lines.
Or how do people achieve a smoother roll?
Also: Would you use OpenGL for this?
What you're really doing here is integrating coupled differential equations. Don't worry if you haven't taken enough calculus or physics to know what that means.
People who integrate coupled differential equations for a living have evolved many algorithms to do so efficiently.
You really have four equations here: acceleration in x- and y-directions and velocity in x- and y-directions:
dvx/dt = ax
dvy/dt = ay
dsx/dt = vx
dsy/dt = vy
(sx, sy) give the position of the ball at a given time. You'll need initial conditions for (sx, sy) and (vx, vy).
It sounds like you've chosen the simplest way to integrate ODEs: explicit Euler integration. You calculate the values at the end of a step from the values at the beginning plus the rate of change times the time step:
(vx, vy)_1 = (vx, vy)_0 + (ax, ay)_0 * dt
(sx, sy)_1 = (sx, sy)_0 + (vx, vy)_0 * dt
It's easy to program, but it tends to suffer from stability problems under certain conditions if your time step is too large.
You can shrink your time step, which will force you to perform the calculations many more times, or switch to another integration scheme. Search for implicit integration, Runge-Kutta, etc.
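As a minimal sketch (not your code), here is one integration step written out in Java. The field and parameter names mirror the equations above, and dt is the elapsed time in seconds (e.g. from SensorEvent timestamps). The semi-implicit (symplectic) Euler variant shown is only a small change from explicit Euler, but it is noticeably more stable at game-sized time steps.

class BallState {
    double sx, sy;   // position
    double vx, vy;   // velocity

    void step(double ax, double ay, double dt) {
        // Update the velocity first...
        vx += ax * dt;
        vy += ay * dt;
        // ...then advance the position using the *new* velocity.
        sx += vx * dt;
        sy += vy * dt;
    }
}

(Explicit Euler would advance the position with the old velocity before updating it; swapping those two pairs of lines is the entire difference.)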
Integration and rendering are separate problems.

Android AudioRecord Short Values - Can I consider the absolute value of negative amplitudes in a Sound Wave?

I am currently having trouble understanding the significance of negative amplitudes in a traditional Sound Wave, such as in the Short values of Android's Audio Record.
1. Is the amplitude still the distance between zero and the value of the node (absolute value), or the distance from the previous node to the current?
Basically, I am looking into Ludvigsen's Sound Classification Technique (1993), but the demonstrations I have looked at show only positive values.
2. Some Sound Waves have negative values after a previous negative value (or vice versa) rather than bouncing below or above zero after each value. Such as center of image at: http://puu.sh/a0dhg/62b2a5c6da.png (I cannot post images directly yet due to missing reputation).
Therefore my remaining question is: when does a Sound Wave "decide" to go above or below zero? I was under the impression that the below-zero part is a sort of retraction of a previous above-zero value (the compression being pushed and bouncing back), but moving in the same direction relative to zero seems somewhat illogical.
That's pretty much it, thanks in advance. Your help will be most appreciated.
The amplitude of a wave is a measure of its strength (loudness in the case of sound). It basically tells you how far the wave swings away from the neutral position. Several definitions exist; see for example Wikipedia.
The frequency of the wave is a measure of how often in a second it swings a full period (zero - max value - zero - min value - zero).
Any sound can be thought of as a composition of several pure sine (and cosine) waves of different frequencies and amplitudes.
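To make that concrete for AudioRecord, here is a minimal sketch (an assumption about your setup, not code from the question) that treats each 16-bit sample's amplitude as its absolute distance from zero and also computes an RMS loudness for a whole buffer read with AudioRecord.read(...):

static double peakAmplitude(short[] buffer, int length) {
    int peak = 0;
    for (int i = 0; i < length; i++) {
        peak = Math.max(peak, Math.abs(buffer[i]));   // negative samples count via their absolute value
    }
    return peak / 32768.0;   // normalise to 0..1 (16-bit PCM full scale)
}

static double rmsAmplitude(short[] buffer, int length) {
    double sumSquares = 0;
    for (int i = 0; i < length; i++) {
        sumSquares += (double) buffer[i] * buffer[i];
    }
    return Math.sqrt(sumSquares / length) / 32768.0;
}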

Android Accelerometer Profiling

I have written a simple Activity which is a SensorEventListener for Sensor.TYPE_ACCELEROMETER.
In my onSensorChanged(SensorEvent event) I just pick the values in X,Y,Z format and write them to a file.
Added to this X,Y,Z is a label; the label is specific to the activity I am performing,
so it's X,Y,Z,label.
Like this I obtain my activity profile. I would like suggestions on what operations to perform after data collection so as to remove noise and get the best data for an activity.
The main intent of this data collection is to construct a user activity detection application using a neural network library (Neuroph for Android).
Just for fun I wrote a pedometer a few weeks ago, and it would have been able to detect the three activities that you mentioned. I'd make the following observations:
1. In addition to Sensor.TYPE_ACCELEROMETER, Android also has Sensor.TYPE_GRAVITY and Sensor.TYPE_LINEAR_ACCELERATION. If you log the values of all three, you notice that the values of TYPE_ACCELEROMETER are always equal to the sum of the values of TYPE_GRAVITY and TYPE_LINEAR_ACCELERATION. The onSensorChanged(...) method first gives you TYPE_ACCELEROMETER, followed by TYPE_GRAVITY and TYPE_LINEAR_ACCELERATION, which are the results of its internal methodology of splitting the accelerometer readings into gravity and the acceleration that's not due to gravity. Given that you're interested in the acceleration due to activities rather than the acceleration due to gravity, you may find TYPE_LINEAR_ACCELERATION is better for what you need.
2. Whatever sensors you use, the X, Y, Z that you're measuring will depend on the orientation of the device. However, for detecting the activities that you mention, the result can't depend on, e.g., whether the user is holding the device in a portrait or landscape position, or whether the device is flat or vertical, so the individual values of X, Y and Z won't be any use. Instead you'll have to look at the length of the vector, i.e. sqrt(X*X + Y*Y + Z*Z), which is independent of the device orientation.
3. You only need to smooth the data if you're feeding it into something which is sensitive to noise. Instead, I'd say that the data is the data, and you'll get the best results if you use mechanisms which aren't sensitive to noise and hence don't need the data to be smoothed. By definition, smoothing is discarding data. You want to design an algorithm that takes noisy data in at one end and outputs the current activity at the other end, so don't prejudge whether it's necessary to include smoothing as part of that algorithm.
Here is a graph of sqrt(X*X + Y*Y + Z*Z) from Sensor.TYPE_ACCELEROMETER which I recorded when I was building my pedometer. The graph shows the readings measured when I walked for 100 steps. The green line is sqrt(X*X + Y*Y + Z*Z), the blue line is an exponentially weighted moving average of the green line which gives me the average level of the green line, and the red line shows my algorithm counting steps. I was able to count the steps just by looking for the maxima and minima and where the green line crosses the blue line. I didn't use any smoothing or Fast Fourier Transforms. In my experience, for this sort of thing the simplest algorithms often work best, because although complex ones might work in some situations, it's harder to predict how they'll behave in all situations. And robustness is a vital characteristic of any algorithm :-).
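For reference, a minimal sketch (not the answerer's actual pedometer code) of the two signals described above, the orientation-independent vector magnitude and its exponentially weighted moving average, might look like this; ALPHA is an assumed smoothing factor to tune:

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class StepSignalListener implements SensorEventListener {
    private static final double ALPHA = 0.05;                 // assumed smoothing factor
    private double average = SensorManager.GRAVITY_EARTH;     // the "blue line"

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) return;

        float x = event.values[0], y = event.values[1], z = event.values[2];
        double magnitude = Math.sqrt(x * x + y * y + z * z);  // the "green line"

        average = (1 - ALPHA) * average + ALPHA * magnitude;

        // Steps can then be counted from the points where magnitude crosses
        // average between a local maximum and a local minimum.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* not needed */ }
}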
This sounds like an interesting problem!
Have you plotted your data against time to get a feel for it, to see what kind of noise you are dealing with, and to help decide how you might pre-process your data for input to the detector?
A
^
|
|
|
+-----------------> time
I'd start with lines for each activity:
|Ax + Ay + Az|
|Vx + Vy + Vz| (approximate by calculating area of trapezoids formed by your data points)
... etc
Maybe you can work out the orientation of the phone by attempting to detect gravity, then rotate your vectors to a 'standard' orientation (eg positive Z axis = up). If you can do that, then the different axes may become more meaningful. For example, walking (in pocket) would tend to have a velocity on the horizontal plane, which might be distinguished from walking (in hand) by motion in the vertical plane.
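A minimal sketch of that idea, assuming (my assumption, not the answer's) that you also listen to Sensor.TYPE_ROTATION_VECTOR and cache its latest values: SensorManager.getRotationMatrixFromVector gives a matrix R that maps device coordinates to world coordinates, so world[2] is the vertical component of the motion no matter how the phone is held (uses android.hardware.SensorManager):

private final float[] rotationMatrix = new float[9];
private final float[] world = new float[3];

void toWorldFrame(float[] linearAccel, float[] rotationVector) {
    SensorManager.getRotationMatrixFromVector(rotationMatrix, rotationVector);
    // world = R * device (R is a row-major 3x3 matrix)
    for (int row = 0; row < 3; row++) {
        world[row] = rotationMatrix[3 * row] * linearAccel[0]
                   + rotationMatrix[3 * row + 1] * linearAccel[1]
                   + rotationMatrix[3 * row + 2] * linearAccel[2];
    }
}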
As for filters, if the data appears noisy, a simple starting point is to apply a moving average to smooth it. This is a common technique for sensor data in general:
https://en.wikipedia.org/wiki/Moving_average
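As a minimal sketch (the window size is an assumption to tune against your sample rate), a simple moving average over a sliding window looks like this:

static double[] movingAverage(double[] samples, int window) {
    double[] smoothed = new double[samples.length];
    double sum = 0;
    for (int i = 0; i < samples.length; i++) {
        sum += samples[i];
        if (i >= window) sum -= samples[i - window];   // drop the sample that left the window
        smoothed[i] = sum / Math.min(i + 1, window);   // shorter window at the start
    }
    return smoothed;
}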
Also, this post seems relevant to your question:
How to remove Gravity factor from Accelerometer readings in Android 3-axis accelerometer
Things I identified:
1. The data has to be preprocessed according to what you need; in my case I just want 3 inputs and one output.
2. The data has to be subjected to smoothing (five-point smoothing, or any other technique that suits you best) so that the noise gets filtered out (not completely, though). A moving average is one such technique.
3. Linearized data would be good, because you don't know exactly how the data was sampled; use interpolation to help you linearize the data (see the sketch below).
4. Finally, use the FFT (Fast Fourier Transform) to extract the recipe out of the dish, that is, to extract features from your dataset.
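A minimal sketch of that linearizing step (the array names are placeholders): resample irregularly timed readings onto a uniform grid by linear interpolation so that the FFT sees evenly spaced samples. t holds ascending timestamps, v the corresponding sensor values, and n is the number of output samples.

static double[] resampleUniform(long[] t, double[] v, int n) {
    double[] out = new double[n];
    double step = (double) (t[t.length - 1] - t[0]) / (n - 1);
    int j = 0;
    for (int i = 0; i < n; i++) {
        double ti = t[0] + i * step;                          // target time on the uniform grid
        while (j < t.length - 2 && t[j + 1] < ti) j++;        // find the surrounding pair of samples
        double frac = (ti - t[j]) / (double) (t[j + 1] - t[j]);
        out[i] = v[j] + frac * (v[j + 1] - v[j]);             // linear interpolation
    }
    return out;
}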

Motion detection using OpenCV

I have seen questions related to OpenCV motion detection, but my requirement is much simpler, so I am asking again.
I would like to analyse video frames and see if something has changed in the frame. Any kind of motion occurring in the frame has to be recognized. I just want to get notified if something happens; I don't need to track or draw contours.
Attempts made:
1) Template matching using OpenCV (TM_CCORR_NORMED).
I get the similarity index using cvMinMaxLoc and then:
if (sim_index > threshold)
    "Nothing changed"
else
    "Changed"
Problem faced:
I couldn't find a way to decide how to set the threshold. The values for a false match and for a perfect match were very close.
2) Frame differencing against a running average:
a) Compute a running average of the frames.
b) Take the absolute difference between the current frame and the running average.
c) Threshold it to make it binary.
d) Count the number of non-zero values.
Again I am stuck on how to threshold it, because I get a large number of non-zero values even for very similar frames.
Please advise me on which approach I should take. Am I going in the right direction with the above two methods, or is there a simpler method that works in most generic scenarios?
Method 2 is generally regarded as the most simple method for motion detection, and is very effective if you have no water, swaying trees or highly variable lighting conditions in your video.
Normally you implement it like this:
motion_frame=abs(newframe-running_avg);
running_avg=(1-alpha)*running_avg+alpha*newframe;
You can threshold the motion_frame if you want and then count the non-zero pixels, but you could also just sum the elements of motion_frame and threshold that instead (be sure to work with floating-point numbers). Optimizing the parameters for this is pretty easy: just make two trackbars and play around with them. Typically alpha is around 0.1 to 0.3.
Lastly, it is probably overkill to do this on entire frames, you could just use subsampled versions and the result will be very similar.
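A minimal sketch of that scheme with OpenCV's Java bindings (the threshold and alpha values are assumptions to tune, and COLOR_BGR2GRAY may need to be COLOR_RGBA2GRAY depending on how your Android frames arrive):

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class MotionDetector {
    private static final double ALPHA = 0.2;              // update rate of the running average
    private static final double MOTION_THRESHOLD = 8.0;   // mean absolute difference that counts as motion

    private Mat runningAvg;                                // CV_32F accumulator
    private final Mat gray = new Mat();
    private final Mat grayF = new Mat();
    private final Mat diff = new Mat();

    // Returns true if the new frame differs "enough" from the running average.
    public boolean isMotion(Mat frameBgr) {
        Imgproc.cvtColor(frameBgr, gray, Imgproc.COLOR_BGR2GRAY);
        gray.convertTo(grayF, CvType.CV_32F);

        if (runningAvg == null) {
            runningAvg = grayF.clone();
            return false;                                  // first frame: nothing to compare against yet
        }

        Core.absdiff(grayF, runningAvg, diff);                 // motion_frame = |newframe - running_avg|
        Imgproc.accumulateWeighted(grayF, runningAvg, ALPHA);  // running_avg = (1-alpha)*running_avg + alpha*newframe

        double meanDiff = Core.sumElems(diff).val[0] / (diff.rows() * diff.cols());
        return meanDiff > MOTION_THRESHOLD;
    }
}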

FFT on EEG signal in Android understanding the code

I've been attempting to find a library that would enable to perform FFT (Fast Fourier Transform) on some EEG signals in Android.
With the help of Geobits, I've finally found code that might help me do an FFT on an EEG signal, but I am having a hard time figuring out how the code actually works. I want to know what the float arrays x and y are for, and maybe see an example that would help me a little more.
An FFT should return a series of complex numbers (either as rectangular coordinates, or in polar form: phase and magnitude) for a specific range of frequencies...
I'm still working through the expressions, but I'll bet dollars to donuts that the x and y arrays are the real (x) and imaginary (y) components of the complex numbers that are the result of the transformation.
The square root of the sum of the squares of these two components gives the magnitude of the harmonic component at each frequency (the conversion to polar form).
If the phase is important for your application, keep in mind that the FFT (as with any phasor) can be either sine referenced or cosine referenced. I believe sine is the standard, however.
see:
http://www.mathworks.com/help/matlab/ref/fft.html
http://mathworld.wolfram.com/FastFourierTransform.html
Since the FFT gives a truncated approximation to the infinite series created by a harmonic decomposition of a periodic waveform, any periodic waveform can be used to test the functionality of your code.
For example, a square wave should be easy to replicate, and it has very well-known harmonic coefficients. The resolution of the data set will determine the number of harmonics that you can calculate (most FFT algorithms do best with a data set whose length is a power of two and spans an integral number of wavelengths of the lowest frequency that you want to use).
The square wave coefficients should be at odd multiples of the fundamental frequency and have magnitudes that vary inversely with the order of the harmonic.
http://en.wikipedia.org/wiki/Square_wave
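A minimal sketch of that test (the FFT call at the end is a placeholder for whatever routine you are using): generate a square wave with a power-of-two length and an integral number of periods, then check that the magnitude spectrum has peaks only at odd multiples of the fundamental, falling off as 1/n.

double[] square = new double[1024];          // power-of-two length
int periods = 8;                             // integral number of wavelengths in the window
for (int i = 0; i < square.length; i++) {
    double phase = (double) i * periods / square.length;        // 0 .. periods
    square[i] = (phase - Math.floor(phase)) < 0.5 ? 1.0 : -1.0;
}
// Feed square[] into your FFT; in the magnitude output you should see peaks
// only at bins 8, 24, 40, ... with heights proportional to 1, 1/3, 1/5, ...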
