Does Android have a similar method to the iPhone AVAudioPlayer's averagePowerForChannel?
I want to get an average reading of the amplitude as a single value.
I don't think there is a built-in function, but you can calculate it yourself.
To do this, calculate the root-mean-square (RMS) average over a block of consecutive samples:
rms = sqrt((sample0^2 + sample1^2 + sample2^2 + sample3^2) / numberOfSamples)
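A minimal sketch of that calculation in Java, assuming 16-bit PCM samples read from an AudioRecord; the dB conversion at the end is my own addition, roughly analogous to averagePowerForChannel:

import android.media.AudioRecord;

public final class AmplitudeMeter {

    /** Root-mean-square amplitude of the first `count` samples. */
    public static double rms(short[] samples, int count) {
        if (count <= 0) return 0;
        double sumOfSquares = 0;
        for (int i = 0; i < count; i++) {
            sumOfSquares += (double) samples[i] * samples[i];
        }
        return Math.sqrt(sumOfSquares / count);
    }

    /** Reads one buffer from an already-started AudioRecord and returns its level in dBFS. */
    public static double readLevelDb(AudioRecord recorder, short[] buffer) {
        int read = recorder.read(buffer, 0, buffer.length);
        double rms = rms(buffer, read);
        // 32768 is full scale for signed 16-bit audio; clamp to avoid log(0).
        return 20 * Math.log10(Math.max(rms, 1e-6) / 32768.0);
    }
}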
The following links should be helpful, as they contain the full source code of two excellent sound-related Android projects.
Ringdroid
Rehearsal Assistant
Context
I need to get the current speed of the aircraft.
Efforts
Looking at the documentation, I can see that the method WaypointMissionOperator.getAutoFlightSpeed exists, which should give me that info.
Problem
While implementing the code, I found that this method is missing. Was it moved, renamed, or removed?
Environment
OS: Android
DJI SDK version: 4.13.1
Have a look at the FlightControllerState class; it contains the current status of pretty much all of the aircraft's components.
Specifically, you should look at FlightController.setStateCallback(). The callback delivers the current state; from it you want getVelocityX, getVelocityY and getVelocityZ.
There are also keys for each of these; the documentation gives the details.
Having the X, Y and Z velocities, the speed is just the magnitude of the velocity vector:
speed = sqrt(velocityX^2 + velocityY^2 + velocityZ^2)
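For reference, a minimal sketch of getting those values via the state callback, assuming you already have a FlightController instance from SDK 4.x (treat the exact package and class names as my recollection rather than gospel):

import dji.common.flightcontroller.FlightControllerState;
import dji.sdk.flightcontroller.FlightController;

void listenForSpeed(FlightController flightController) {
    flightController.setStateCallback(new FlightControllerState.Callback() {
        @Override
        public void onUpdate(FlightControllerState state) {
            // Velocities are reported in m/s.
            float vx = state.getVelocityX();
            float vy = state.getVelocityY();
            float vz = state.getVelocityZ();
            double speed = Math.sqrt(vx * vx + vy * vy + vz * vz);
            // Use `speed` (magnitude of the velocity vector) as needed.
        }
    });
}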
After some weeks of waiting I finally have my Project Tango. My idea is to create an app that generates a point cloud of my room and exports it as .xyz data. I'll then use the .xyz file to show the point cloud in a browser! I started off by compiling and adjusting the point cloud example that's on Google's GitHub.
Right now I use onXyzIjAvailable(TangoXyzIjData tangoXyzIjData) to get a frame of x, y and z values: the points. I then save these frames in a PCLManager in the form of Vector3. After I'm done scanning my room, I simply write all the Vector3s from the PCLManager to a .xyz file using:
OutputStream os = new FileOutputStream(file);
size = pointCloud.size();
for (int i = 0; i < size; i++) {
    String row = String.valueOf(pointCloud.get(i).x) + " "
            + String.valueOf(pointCloud.get(i).y) + " "
            + String.valueOf(pointCloud.get(i).z) + "\r\n";
    os.write(row.getBytes());
}
os.close();
Everything works fine: no compilation errors or crashes. The only thing that seems to be going wrong is the rotation or translation of the points in the cloud. When I view the point cloud everything is messed up; the area I scanned is not recognizable, though the number of points is the same as recorded.
Could this have something to do with the fact that I don't use PoseData together with the XyzIjData? I'm kind of new to this subject and have a hard time understanding what the PoseData actually does. Could someone explain it to me and help me fix my point cloud?
Yes, you have to use TangoPoseData.
I guess you are using TangoXyzIjData correctly; but the data you get this way is relative to where the device is and how the device is tilted when you take the shot.
Here's how I solved this:
I started from java_point_to_point_example. In this example they get the coordinates of two different points in two different coordinate systems and then write those coordinates with respect to the base coordinate frame pair.
First of all you have to set up your extrinsics, so you'll be able to perform all the transformations you'll need. To do that I call mExtrinsics = setupExtrinsics(mTango) at the end of my setTangoListener() function. Here's the code (which you can also find in the example I linked above).
private DeviceExtrinsics setupExtrinsics(Tango mTango) {
    // IMU to color camera transform
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair();
    framePair.baseFrame = TangoPoseData.COORDINATE_FRAME_IMU;
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR;
    TangoPoseData imu_T_rgb = mTango.getPoseAtTime(0.0, framePair);
    // IMU to device transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_DEVICE;
    TangoPoseData imu_T_device = mTango.getPoseAtTime(0.0, framePair);
    // IMU to depth camera transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH;
    TangoPoseData imu_T_depth = mTango.getPoseAtTime(0.0, framePair);
    return new DeviceExtrinsics(imu_T_device, imu_T_rgb, imu_T_depth);
}
Then when you get the point cloud you have to "normalize" it. Using your extrinsics this is pretty simple:
public ArrayList<Vector3> normalize(TangoXyzIjData cloud, TangoPoseData cameraPose, DeviceExtrinsics extrinsics) {
    ArrayList<Vector3> normalizedCloud = new ArrayList<>();
    TangoPoseData camera_T_imu = ScenePoseCalculator.matrixToTangoPose(extrinsics.getDeviceTDepthCamera());

    while (cloud.xyz.hasRemaining()) {
        Vector3 rotatedV = ScenePoseCalculator.getPointInEngineFrame(
                new Vector3(cloud.xyz.get(), cloud.xyz.get(), cloud.xyz.get()),
                camera_T_imu,
                cameraPose
        );
        normalizedCloud.add(rotatedV);
    }
    return normalizedCloud;
}
This should be enough: now you have a point cloud with respect to your base frame of reference.
If you superimpose two or more of these "normalized" clouds you can get a 3D representation of your room.
There is another way to do this with a rotation matrix, explained here.
My solution is pretty slow (it takes the dev kit around 700 ms to normalize a cloud of ~3000 points), so it is not suitable for real-time 3D reconstruction.
At the moment I'm trying to use the Tango 3D Reconstruction Library in C via the NDK and JNI. The library is well documented, but it is very painful to set up the environment and start using JNI (in fact, I'm stuck there at the moment).
Drifting
There still is a problem when I turn around with the device. It seems that the point cloud spreads out a lot.
I guess you are experiencing some drifting.
Drifting happens when you use Motion Tracking alone: it consists of many very small errors in estimating your pose which together add up to a big error in your pose relative to the world. For instance, if you take your Tango device and walk in a circle while tracking your TangoPoseData, and then plot your trajectory in a spreadsheet or whatever you like, you'll notice that the tablet never returns to its starting point, because it is drifting away.
The solution to that is using Area Learning.
If you have no clear ideas about this topic, I suggest watching this talk from Google I/O 2016. It covers a lot of points and gives you a nice introduction.
Using Area Learning is quite simple.
You just have to change your base frame of reference to TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION. This way you tell Tango to estimate its pose not with respect to where it was when you launched the app, but with respect to a fixed point in the area.
Here's my code:
private static final ArrayList<TangoCoordinateFramePair> FRAME_PAIRS =
        new ArrayList<TangoCoordinateFramePair>();
static {
    FRAME_PAIRS.add(new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
            TangoPoseData.COORDINATE_FRAME_DEVICE
    ));
}
Now you can use this FRAME_PAIRS as usual.
Then you have to modify your TangoConfig to tell Tango to use Area Learning, via the key TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION. Remember that when using TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION you CAN'T use learning mode or load an ADF (Area Description File).
So you can't use:
TangoConfig.KEY_BOOLEAN_LEARNINGMODE
TangoConfig.KEY_STRING_AREADESCRIPTION
Here's how I initialize TangoConfig in my app:
TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
// Turn the depth sensor on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
// Turn motion tracking on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
// If Tango gets stuck, it tries to recover automatically.
config.putBoolean(TangoConfig.KEY_BOOLEAN_AUTORECOVERY, true);
// Tango tries to store and remember places and rooms;
// this is used to reduce drifting.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION, true);
// Turn the color camera on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_COLORCAMERA, true);
Using this technique you'll get rid of that spreading.
PS
In the talk I linked above, at around 22:35, they show how to port your application to Area Learning. In their example they use TangoConfig.KEY_BOOLEAN_ENABLE_DRIFT_CORRECTION. This key no longer exists (at least in the Java API); use TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION instead.
I plan to "visualise" some graphical data via audio. To make it short: I get a bunch of frequencies and related amplitude values out of some image data. This frequency/amplitude table with - lets say 256 pairs of data - has to be converted into related sine-waveforms.
One solution would be to generate sine-waveforms with different frequencies for eeach table entry. That would mean to generate sine waveforms for up to 256 times. But I'd guess that's quite slow. So using FFT-conversion should be a better solution for this?
So my question: is there some kind of fast and easy to use FFT standard available for Android that could be used for this?
In my Android project I used JTransforms, which worked flawlessly on Android.
Example code:
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
AudioRecord ar = ...; // initialize the AudioRecord here
ar.startRecording();
// Here's the Fast Fourier Transform from JTransforms
DoubleFFT_1D fft = new DoubleFFT_1D(samples.length);
do {
    // Read audio into 'samples' and convert it to the double[] 'samplesD'
    ar.read(samples, 0, samples.length);
    // The FFT is computed in place in 'samplesD'
    fft.realForward(samplesD);
} while ( /* condition */ );
ar.stop();
ar.release();
UPDATE:
JTransforms is maintained on GitHub here and is available as a Maven artifact here.
To use it with recent Gradle versions, do something like:
dependencies {
...
implementation 'com.github.wendykierp:JTransforms:3.1'
}
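Back to the original use case: instead of summing 256 sine waves per block, you can fill a spectrum buffer and run a single inverse real FFT. Here is a rough sketch with JTransforms; the block size and the amplitude scaling are my own assumptions, not a drop-in solution:

import org.jtransforms.fft.DoubleFFT_1D;

// Builds one block of n time-domain samples from (bin -> amplitude) pairs.
// Assumes n is even and every bin index is strictly between 0 and n/2.
double[] synthesize(int n, int[] bins, double[] amplitudes) {
    double[] spectrum = new double[n]; // packed real-FFT layout used by JTransforms
    for (int i = 0; i < bins.length; i++) {
        int k = bins[i];
        spectrum[2 * k] = amplitudes[i] * n / 2.0; // real part; scaling is an assumption
        spectrum[2 * k + 1] = 0.0;                 // imaginary part
    }
    new DoubleFFT_1D(n).realInverse(spectrum, true); // scale = true normalizes by n
    return spectrum; // now holds the synthesized samples
}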
I want to develop an app to calculate sound frequency in Android. The Android device will take sound from the microphone (i.e. outside sound), and I have a single-color background screen in the app.
When the sound frequency changes, I have to change the background color of the screen.
So my question is: "How can I get the sound frequency?"
Is there any Android API available for this?
Please help me out with this problem.
Your problem was solved here (EDIT: archived here). You can also analyze the frequency by using an FFT.
EDIT: FFTBasedSpectrumAnalyzer (example code, the link from the comment)
Thanks for the reply. I have done this by using the sample at
http://som-itsolutions.blogspot.in/2012/01/fft-based-simple-spectrum-analyzer.html
Just modify your code to calculate the sound frequency using the method below:
// sampleRate = 44100
public static int calculate(int sampleRate, short[] audioData) {
    int numSamples = audioData.length;
    int numCrossing = 0;
    for (int p = 0; p < numSamples - 1; p++) {
        if ((audioData[p] > 0 && audioData[p + 1] <= 0) ||
            (audioData[p] < 0 && audioData[p + 1] >= 0)) {
            numCrossing++;
        }
    }
    float numSecondsRecorded = (float) numSamples / (float) sampleRate;
    float numCycles = numCrossing / 2f;
    float frequency = numCycles / numSecondsRecorded;
    return (int) frequency;
}
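For completeness, here is roughly how you might feed microphone data into that method (uses android.media.AudioRecord, AudioFormat and MediaRecorder, and requires the RECORD_AUDIO permission); the buffer parameters and the isRecording flag are my own assumptions, and mapping the frequency to a background color is left to your UI code:

int sampleRate = 44100;
int bufferSize = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, bufferSize);
short[] audioData = new short[bufferSize];

recorder.startRecording();
while (isRecording) { // your own stop flag
    int read = recorder.read(audioData, 0, audioData.length);
    if (read > 0) {
        int frequency = calculate(sampleRate, audioData);
        // e.g. map `frequency` to a color and update the background on the UI thread
    }
}
recorder.stop();
recorder.release();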
The other answers show how to display a spectrogram. I think the question is how to detect a change in fundamental frequency. This is asked so often on Stack Exchange I wrote a blog entry (with code!) about it:
http://blog.bjornroche.com/2012/07/frequency-detection-using-fft-aka-pitch.html
Admittedly, the code is in C, but I think you'll find it easy to port.
In short, you must:
Low-pass filter the input signal so that higher-frequency overtones are not mistaken for the fundamental frequency (this may not appear to be an issue in your application, since you are just looking for a change in pitch, but I recommend doing it anyway for reasons that are too complex to go into here).
Window the signal, using a proper windowing function. To get the most responsive output, you should overlap the windows, which I don't do in my sample code.
Perform an FFT on the data in each window, and calculate the frequency using the index of the maximum absolute peak value.
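As a rough illustration of the windowing and peak-picking steps in Java (using JTransforms, which was mentioned earlier in this thread; this is my own sketch, not the blog's code):

import org.jtransforms.fft.DoubleFFT_1D;

// Estimates the dominant frequency of one window of 16-bit samples.
double dominantFrequency(short[] samples, int sampleRate) {
    int n = samples.length;
    double[] windowed = new double[n];
    for (int i = 0; i < n; i++) {
        // Hann window to reduce spectral leakage
        double w = 0.5 * (1 - Math.cos(2 * Math.PI * i / (n - 1)));
        windowed[i] = samples[i] * w;
    }
    new DoubleFFT_1D(n).realForward(windowed); // in-place packed spectrum

    int peakBin = 1;
    double peakMag = 0;
    for (int k = 1; k < n / 2; k++) {
        double re = windowed[2 * k];
        double im = windowed[2 * k + 1];
        double mag = re * re + im * im;
        if (mag > peakMag) {
            peakMag = mag;
            peakBin = k;
        }
    }
    return (double) peakBin * sampleRate / n; // bin index -> Hz
}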
Keep in mind that for your application, where you probably want to detect changes in pitch accurately and quickly, the FFT method I describe may not be sufficient. You have two options:
There are techniques for increasing the specificity of the pitch tracking using phase information, not just the absolute peak.
Use a time-domain method based on autocorrelation. YIN is an excellent choice (Google "YIN pitch tracking").
Here is a link to the code mentioned. There's also some other useful code there.
https://github.com/gast-lib/gast-lib/blob/master/library/src/root/gast/audio/processing/ZeroCrossing.java
Here's the deal with ZeroCrossing:
It is inaccurate for determining frequency precisely from audio recorded on an Android device. That said, it is still useful for giving your app a general sense of whether the sound it is hearing is a constant singing tone or just noise.
The code here seems to work quite well for determining frequency (if you can translate it from C# to Java):
http://code.google.com/p/yaalp/
I'm trying to get the Euler angles of a Face that is detected by FaceDetector.
Here is what I use to output to Logcat:
Log.v("debug", " X: " + face.pose(Face.EULER_X) + " Y: " + face.pose(Face.EULER_Y) + " Z: " + face.pose(Face.EULER_Z) );
But it always returns 0.0 for all three, no matter what angle the face is at. Any ideas why?
Yeah, the FaceDetector from API 1 never returns a pose angle. You can look at the source code to verify this.
The newer FaceDetectionListener from API 14 will return a pose angle, but it's only available on a limited number of devices right now. Not even all devices running API 14 can use it. You have to call getMaxNumDetectedFaces() to see if your device supports that API.
You can alternatively try using OpenCV. A couple of options for that are http://code.opencv.org/projects/opencv/wiki/OpenCV4Android and http://code.google.com/p/javacv/. In my experience they aren't worth the hassle unless you really, really need the pose angle.
There are a few similar questions here. Check out the first answer from this link:
Android Facedetector pose values are always 0
And see below for someone that says they solved the problem:
Android Face Detection
Set the detector's mode to ACCURATE_MODE via setMode().
Here is an example that worked for me in Kotlin:
val detector = FaceDetector.Builder(context)
    .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
    .setMode(FaceDetector.ACCURATE_MODE)
    .setTrackingEnabled(true)
    .build()
On the latest version, developers need to set the performance mode to PERFORMANCE_MODE_ACCURATE, not PERFORMANCE_MODE_FAST:
setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_ACCURATE)
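For example, with ML Kit's face detection this would look roughly like the following (my own sketch, assuming the com.google.mlkit:face-detection dependency):

import com.google.mlkit.vision.face.FaceDetection;
import com.google.mlkit.vision.face.FaceDetector;
import com.google.mlkit.vision.face.FaceDetectorOptions;

FaceDetectorOptions options = new FaceDetectorOptions.Builder()
        .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_ACCURATE)
        .build();
FaceDetector detector = FaceDetection.getClient(options);
// Each detected Face then exposes getHeadEulerAngleY() and getHeadEulerAngleZ()
// (and getHeadEulerAngleX() in recent versions) for the pose angles.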