I know the Visualizer class can show a waveform while playing audio with the Android MediaPlayer. But I want to show a visualizer while recording audio, i.e. while recording I want to show a linear wave that changes based on the user's voice. Is this possible in Android?
By calling MediaRecorder.getMaxAmplitude() every x milliseconds, you will get (from the official documentation):
the maximum absolute amplitude that was sampled since the last call to
this method.
Then you can process this value in real time to draw a graph or change some view properties.
Not perfect, but I hope it helps =)
Edit: just so you know, the retrieved value has the same range on all Android devices: between 0 and 32767. (I have more than 10k user reports giving me this value when they blow into the mic.)
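For illustration, a minimal polling sketch (the recorder must already be started; updateWave() is a placeholder for your own drawing code, not a real API):

// Poll getMaxAmplitude() every 100 ms on the UI thread.
// Note: the first call after starting the recorder always returns 0.
private static final int POLL_INTERVAL_MS = 100;
private final Handler handler = new Handler(Looper.getMainLooper());
private final Runnable poll = new Runnable() {
    @Override
    public void run() {
        int amplitude = recorder.getMaxAmplitude(); // 0..32767
        updateWave(amplitude / 32767f);             // normalized to 0..1
        handler.postDelayed(this, POLL_INTERVAL_MS);
    }
};
// after recorder.start():
// handler.post(poll);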
You may need to use the AudioRecord class instead of MediaRecorder.
Check the AudioRecord#read(...) methods, which put audio data into a buffer instead of writing it directly to a file.
To show changes on the graph you will have to analyze the data (which is encoded as 8- or 16-bit PCM) and update the graph in real time.
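A hedged sketch of that approach (44.1 kHz mono 16-bit is an assumption; the RECORD_AUDIO permission is required):

// Read 16-bit PCM samples from the mic into a short[] buffer.
int sampleRate = 44100;
int minBuf = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, minBuf);
short[] buffer = new short[minBuf / 2];
record.startRecording();
while (recording) { // "recording" is your own stop flag
    int read = record.read(buffer, 0, buffer.length);
    // buffer[0..read) now holds samples in [-32768, 32767];
    // analyze them and update your graph here
}
record.stop();
record.release();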
Two important things:
You need to convert the live bytes from the mic into numeric values in order to plot them.
Since you are drawing a real-time graph, use a SurfaceView to plot those points.
To convert the recorded bytes into numeric values, refer to: Android: Listener to record sound if any sound occurs, where the variable "temp" holds the numeric value of your audio.
Plot the points
These numeric values are your Y values, plotted as a graph against increasing X values (time intervals: 0, 1, 2, ...) using a SurfaceView.
e.g.:
Canvas canvas = holder.lockCanvas();
// canvas.drawLine(prevX, prevY, x, y, paint);
canvas.drawPoint(x, y, paint);
holder.unlockCanvasAndPost(canvas);
You need not plot every value; for efficiency you can filter the values with your own conditions and plot only at certain time intervals, as in the sketch below.
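For example, a sketch that reduces each buffer to one peak value per block, so only a handful of points are drawn per read (the block size is an arbitrary assumption):

// One peak value per block of samples; each peak becomes one Y value.
static int[] peaks(short[] buffer, int read, int blockSize) {
    int[] out = new int[(read + blockSize - 1) / blockSize];
    for (int b = 0; b < out.length; b++) {
        int end = Math.min((b + 1) * blockSize, read);
        int peak = 0;
        for (int i = b * blockSize; i < end; i++) {
            peak = Math.max(peak, Math.abs(buffer[i]));
        }
        out[b] = peak;
    }
    return out;
}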
Hope this helps :)
I need to output the audio from the left and right channels to the headphone jack, and from the headphone jack to an oscilloscope. I can't get the correct audio waveform with Float.MAX_VALUE and Float.MIN_VALUE. Usually 16-bit audio samples are of type short, with values in the range -32768 to +32767, so you can assign values with Short.MAX_VALUE and Short.MIN_VALUE. But currently my audio is of type float, i.e. AudioFormat.ENCODING_PCM_FLOAT, and using Float.MAX_VALUE and Float.MIN_VALUE does not produce the correct audio waveform on the oscilloscope. The actual audio waveform has 0.4 milliseconds of noise before and after, but when I use 3.5f or -3.5f, the shape of the waveform looks correct, although it doesn't reach the maximum. So what are the maximum and minimum audio values for the float type?
[Screenshots omitted: the actual waveform with 0.4 ms of noise before and after, and the correct waveform shape obtained with 3.5f/-3.5f, which does not reach the maximum.]
From the docs:
...The implementation does not clip for sample values within the nominal range [-1.0f, 1.0f], provided that all gains in the audio pipeline are less than or equal to unity (1.0f), and in the absence of post-processing effects that could add energy, such as reverb. For the convenience of applications that compute samples using filters with non-unity gain, sample values +3 dB beyond the nominal range are permitted. However such values may eventually be limited or clipped, depending on various gains and later processing in the audio path. Therefore applications are encouraged to provide samples values within the nominal range.
A 3dB power increase corresponds to an increase in voltage of sqrt(2), or roughly 1.41. So, according to the documentation, your device may be able to handle -1.41 to 1.41, but note the caveat about clipping.
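As a quick check of that arithmetic (these are derived numbers, not API constants):

// +3 dB of amplitude headroom above the nominal 1.0f full scale
double headroom = Math.pow(10.0, 3.0 / 20.0); // ≈ 1.4125
// sqrt(2) ≈ 1.4142; the tiny difference exists because a factor of
// two in power is really 10*log10(2) ≈ 3.0103 dB, not exactly 3 dB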
I am working on an Android project in which, let's say, I have an audio file in .wav format.
I want to create an array with three fields:
1. Time (in seconds or milliseconds) into the audio file.
2. Amplitude at that particular second of the audio file.
3. Frequency at that particular second of the audio file.
I am thinking that I can use a for or while loop to build the array by getting the amplitude and frequency at each second and adding them to my ArrayList.
But I don't know how to get the amplitude and frequency at a particular second from my audio file.
I have never worked with audio before, so I don't know whether there is a method to get these values.
I am thinking that, since amplitude can be represented on an amplitude-versus-time graph, it could be an integer or a long (I actually don't know).
Is it possible? If yes, then how can I achieve this?
Thank you in advance :-)
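For the amplitude part, a rough sketch might look like this (it assumes 16-bit mono little-endian PCM and a canonical 44-byte WAV header, which real code should parse properly; frequency would additionally need an FFT over each one-second window):

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class WavAmplitude {
    public static void main(String[] args) throws IOException {
        int sampleRate = 44100; // assumed; read it from the header in real code
        DataInputStream in = new DataInputStream(new FileInputStream(args[0]));
        in.skipBytes(44);                      // skip the canonical WAV header
        byte[] raw = new byte[sampleRate * 2]; // one second of 16-bit samples
        int n, second = 0;
        while ((n = in.read(raw)) > 0) {
            int peak = 0;
            for (int i = 0; i + 1 < n; i += 2) {
                int s = (raw[i + 1] << 8) | (raw[i] & 0xff); // little-endian
                peak = Math.max(peak, Math.abs(s));
            }
            System.out.println("second " + (second++) + ": peak = " + peak);
        }
        in.close();
    }
}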
I'm trying to develop an app that calculates the reverberation time of a room and displays it on the screen.
What I've done so far is:
record the audio into a wav file
extract the bytes from the wav file and convert them to doubles
plot the data obtained using the equation: SPL = 20*log10(sample / 20 µPa)
then, from the figure I've plotted, I can obtain the RT60 easily
The point is that I'm not really sure whether what I'm doing makes any sense. Wherever I search for information, I see that the RT is obtained by octave (or third-octave) bands, and in my case I'm not doing anything with the frequency; I'm just plotting the graph against time (plot omitted).
So my question is: is there anything I'm missing?
Should the "samples" value in the SPL formula be something else? What I'm doing to obtain them is:
double audioSample = (double) ((array[i + 1] << 8) | (array[i] & 0xff)) / 32767.0;
and then I put the [-1, +1] values I obtain directly into the formula.
For what frequency am I theoretically plotting the RT?
Thanks
For the voice you can use a common frequency: 500 Hz, 1000 Hz, or 2000 Hz, or an average of the three, as in the RT60 calculation. Oliver.
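To illustrate the RMS point (a sketch only; without microphone calibration the result is dB relative to full scale, not absolute SPL), compute the level over short windows instead of single samples:

// Level in dB (re full scale) per 10 ms window of [-1, 1] samples.
static double[] levelPerWindow(double[] samples, int sampleRate) {
    int win = sampleRate / 100; // 10 ms windows
    double[] levels = new double[samples.length / win];
    for (int w = 0; w < levels.length; w++) {
        double sum = 0;
        for (int i = w * win; i < (w + 1) * win; i++) {
            sum += samples[i] * samples[i];
        }
        double rms = Math.sqrt(sum / win);
        levels[w] = 20 * Math.log10(rms + 1e-12); // guard against log(0)
    }
    return levels;
}

Band-limiting the samples to an octave around 500/1000/2000 Hz would be a separate filtering step before this.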
I would like to know how I can synchronize an MP3 file, a text box, and images (the MP3, text, and images are received via HTTP). They will be displayed in a video player, which must receive and show the images every 10 seconds, and likewise show a text box synchronized with the audio and therefore with the images (showing the text of the MP3). I have researched this but don't know where to start, or how to manage the intervals and the data, with methods like "Timer" in C#. I would really appreciate your help.
I am assuming the text and the images have timestamps associated with them (the time at which they should be displayed relative to the audio). If so, then you can do this:
Play the audio file using the MediaPlayer class.
Use the method getCurrentPosition() to get the current time value.
Based on that time, check whether you need to update the image, the text, or both; if so, update the TextView and ImageView (see the sketch below).
Check out the CountDownTimer class provided in the android sdk to manage the intervals: http://developer.android.com/reference/android/os/CountDownTimer.html
As for getting images via http here is a simple java tutorial on sockets: http://docs.oracle.com/javase/tutorial/networking/sockets/
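A minimal sketch of those steps (the Cue class, the cues list, and the view names are assumptions, not a real API):

// Poll the player's position and apply any cues whose time has passed.
// "cues" is a list sorted by timeMs.
final Handler handler = new Handler(Looper.getMainLooper());
final Runnable sync = new Runnable() {
    int next = 0; // index of the next cue to show
    @Override
    public void run() {
        int pos = mediaPlayer.getCurrentPosition(); // milliseconds
        while (next < cues.size() && cues.get(next).timeMs <= pos) {
            Cue c = cues.get(next++);
            imageView.setImageBitmap(c.image);
            subtitles.setText(c.text);
        }
        if (mediaPlayer.isPlaying()) {
            handler.postDelayed(this, 200); // re-check 5 times per second
        }
    }
};
mediaPlayer.start();
handler.post(sync);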
I resolved it with:
if (mediaPlayer.isPlaying() && abc >= 313800 && abc < 379340) {
    currentimageindex = 6;
    subtitles.setText(this.getString(R.string.t7));
}
(Here abc is the current playback position in milliseconds.) I hope this helps others.
I am using the AudioRecord class to analyze raw PCM bytes as they come in from the mic, and that's working nicely. Now I need to convert the PCM bytes into decibels.
I have a formula that converts sound pressure in Pa into dB:
dB = 20 * log10(p / p_ref)
So the question is: what exactly are the bytes I am getting from the AudioRecord buffer? Amplitude? Sound pressure in pascals? Something else?
I tried putting the values into the formula, but it comes back with very high dB values, so I don't think it's right.
Thanks
Disclaimer: I know little about Android.
Your device is probably recording in mono at 44,100 samples per second (maybe less), using two bytes per sample. So your first step is to combine pairs of bytes in your original data into two-byte integers (I don't know offhand how this is done in Android).
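In Java it would look something like this, assuming 16-bit little-endian PCM (which is what AudioRecord delivers with ENCODING_PCM_16BIT):

// Combine little-endian byte pairs into signed 16-bit samples.
static short[] toSamples(byte[] raw, int length) {
    short[] samples = new short[length / 2];
    for (int i = 0; i < samples.length; i++) {
        samples[i] = (short) ((raw[2 * i + 1] << 8) | (raw[2 * i] & 0xff));
    }
    return samples;
}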
You can then compute the decibel value (relative to the peak) of each sample by taking the normalized absolute value of the sample and passing it to your dB function:
double db = 20 * Math.log10(Math.abs(sampleVal) / 32768.0);
A value near the peak (e.g. +32767 or -32768) will have a dB value near 0. A value of 3277 (0.1 of full scale) will have a dB value of -20; a value of 327 (0.01) will have a dB value of -40, and so on.
The problem is likely the definition of the "reference" sound pressure at the mic. I have no idea what it would be or if it's available.
The only audio application I've ever used defined 0 dB as "full volume", when the samples were at the + or - max value (in unsigned 16 bits, that'd be 0 and 65535). To get this into dB I'd probably do something like this:
// assume input_sample is in the range 0 to 65535
double sample = input_sample - 32767.5;                  // center around zero
double db = 20 * Math.log10(Math.abs(sample) / 32767.5); // 0 dB at full scale
I don't know if that's right, but it feels right to the mathematically challenged me. As input_sample approaches the "middle", the result will look more and more like negative infinity.
Now that I think about it, though, if you want SPL or something similar, that might require different trickery, like doing an RMS evaluation between the zero crossings; again, something I can only guess at because I have no idea how it really works.
The reference pressure in Leq (sound pressure level) calculations is 20 micro-Pascal (rms).
To measure absolute Leq levels, you need to calibrate your microphone using a calibrator. Most calibrators fit 1/2" or 1/4" microphone capsules, so I have my doubts about calibrating the microphone on an Android phone. Alternatively, you may be able to use the microphone sensitivity (mV/Pa) and then calibrate the voltage level going into the ADC. Even less reliable results could be had by comparing the Android values with the measured sound level of a diffuse, stationary sound field using a sound level meter.
Note that in Leq calculations you normally use the RMS values. A single sample's value doesn't mean much.
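For example, a sketch of the RMS step (uncalibrated, so the result is dB relative to full scale; adding a measured calibration offset, as described above, would turn it into an absolute level):

// RMS level of a buffer of 16-bit samples, in dB re full scale.
static double dbFullScale(short[] buffer, int read) {
    double sum = 0;
    for (int i = 0; i < read; i++) {
        double s = buffer[i] / 32768.0; // normalize to [-1, 1]
        sum += s * s;
    }
    double rms = Math.sqrt(sum / read);
    return 20 * Math.log10(rms + 1e-12); // guard against log(0)
}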
I held my sound level meter right next to the mic on my Google Ion and went 'Woooooo!', and noted that clipping occurred at about 105 dB SPL. Hope this helps.
The units are whatever units are used for the reference reading. In the formula, the reading is divided by the reference reading, so the units cancel out and no longer matter.
In other words, decibels are a way of comparing two things; they are not an absolute measurement. When you see decibels used as if they were absolute, the comparison is with the quietest sound the average human can hear.
In our case, it is a comparison to the highest reading the device handles (thus every other reading is negative, i.e. less than the maximum).