Recording and recognizing sounds on Android - android

I'm developing an app where the user can record a sound, such as a clap or another sound. The Android system should save it until the user deletes the command. The user can then associate that recorded sound with a command, so that the device fires an Intent or performs some other action. My question: is it possible, while the microphone is on, for the device to start the Intent when it hears the recorded sound, and if so, how can I do that? I think I could maybe do it with getMaxAmplitude(), but I would need a way to measure how long that amplitude lasts.
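For reference, here is a rough sketch of the getMaxAmplitude() idea mentioned above: poll MediaRecorder.getMaxAmplitude() on a timer and track how long the level stays above a threshold. The THRESHOLD and POLL_MS values, and the AmplitudeWatcher class itself, are only illustrative assumptions, not a ready-made API.

import android.media.MediaRecorder;
import android.os.Handler;
import android.os.Looper;

// Poll getMaxAmplitude() periodically and measure how long the level stays loud.
public class AmplitudeWatcher {
    private static final int THRESHOLD = 15000;  // out of roughly 32767
    private static final long POLL_MS = 100;

    private final MediaRecorder recorder;        // must already be recording
    private final Handler handler = new Handler(Looper.getMainLooper());
    private long loudSinceMs = -1;

    public AmplitudeWatcher(MediaRecorder recorder) {
        this.recorder = recorder;
    }

    public void start() {
        handler.post(poller);
    }

    private final Runnable poller = new Runnable() {
        @Override
        public void run() {
            int amplitude = recorder.getMaxAmplitude(); // max amplitude since the last call
            long now = System.currentTimeMillis();
            if (amplitude > THRESHOLD) {
                if (loudSinceMs < 0) loudSinceMs = now;
                long durationMs = now - loudSinceMs;    // how long the sound has lasted so far
                // Compare durationMs with the length of the stored recording here.
            } else {
                loudSinceMs = -1;                       // the sound ended, reset the timer
            }
            handler.postDelayed(this, POLL_MS);
        }
    };
}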

https://code.google.com/p/musicg/ may help you.
You can find the demo app on the site.
The features described on the site are as follows:
Clap Api - Detect whether the input audio is a clap
Whistle Api - Detect whether the input audio is a whistle
Read PCM WAVE Headers
Read audio data
Trim the audio data
Save the edited audio file
Read amplitude-time domain data
Read frequency-time domain data
Render audio wave form image (Requires Java 2D & Java Image I/O, Android non-compatible)
Render audio spectrogram image (Requires Java 2D & Java Image I/O, Android non-compatible)
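As a rough sketch of clap detection over a live microphone stream, assuming musicg's WaveHeader setters, the ClapApi(WaveHeader) constructor, and its isClap(byte[]) method as used in the demo app (the sample rate and buffer handling here are illustrative, and the RECORD_AUDIO permission is required):

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import com.musicg.api.ClapApi;
import com.musicg.wave.WaveHeader;

public class ClapDetector {
    private static final int SAMPLE_RATE = 44100;

    public void listenForClap() {
        int bufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufferSize);

        // Describe the incoming PCM format so musicg can interpret the raw bytes.
        WaveHeader header = new WaveHeader();
        header.setChannels(1);
        header.setBitsPerSample(16);
        header.setSampleRate(SAMPLE_RATE);
        ClapApi clapApi = new ClapApi(header);

        byte[] frame = new byte[bufferSize];
        recorder.startRecording();
        boolean clapped = false;
        while (!clapped) {
            int read = recorder.read(frame, 0, frame.length);
            if (read > 0 && clapApi.isClap(frame)) {
                clapped = true;  // a clap was detected in this frame: fire your Intent here
            }
        }
        recorder.stop();
        recorder.release();
    }
}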

Related

What is the amplification factor required for android's media recorder to match the output of iOS's AVAudioRecorder?

I have a cross-platform (iOS and Android) app where I record audio clips and then send them to the server for some machine learning operations. In my iOS app I use AVAudioRecorder for recording the audio, and in the Android app I use MediaRecorder. On the device I initially use the m4a format because of size constraints; after the clip reaches the server I convert it to wav format before using it in the ML operations.
My problem is that on iOS, AVAudioRecorder applies some amplification to the raw audio data by default before the developer gets access to it, while on Android, MediaRecorder does not apply any amplification to the raw data. In other words, on iOS I never get the raw audio stream from the microphone, whereas on Android I only ever get the raw audio stream from the microphone. The distinction is clearly visible if you record the same audio source on an iPhone and an Android phone side by side and then import both recordings into Audacity for a visual representation. I have attached a sample representation screenshot below.
In the image, the first track is the Android recording and the second track is the iOS recording. Listening to both through headphones I can only vaguely tell them apart, but when the data points are visualized you can clearly see the difference in the image. These distinctions are bad for the ML operations.
Clearly the iPhone applies a certain amplification factor which I would like to implement on Android as well.
Is anyone aware of the amplification factor? OR are there any other possible alternatives?
It's quite possible that the difference is the effect of Automatic Gain Control.
You can disable this in your app's AVAudioSession by setting its mode to AVAudioSessionModeMeasurement which you do once in your application - usually at startup. This disables a great deal of input signal processing.
Reading your problem description, you might be better off enabling AGC on Android.
If neither of these yields results, you might want to gain scale both signals so they are just below clipping.
let audioSession = AVAudioSession.sharedInstance()
try? audioSession.setMode(AVAudioSessionModeMeasurement)
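On the Android side, here is a minimal sketch of enabling the platform's built-in AGC effect. Note that it attaches to an AudioRecord session (audioRecord below is assumed to be an existing AudioRecord instance) rather than to MediaRecorder, and the effect is only present on devices that support it.

import android.media.audiofx.AutomaticGainControl;

// Attach Android's built-in AGC effect to the recording session, if available.
if (AutomaticGainControl.isAvailable()) {
    AutomaticGainControl agc = AutomaticGainControl.create(audioRecord.getAudioSessionId());
    if (agc != null) {
        agc.setEnabled(true);
    }
}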

How to extract a section of audio using ExoPlayer?

I would like to extract a selected interval of an audio track (e.g. 10s A-B interval) and then export it in a consistent format (regardless of the source file format).
I started by trying to load a local file as a stream, and the idea was then to save the audio buffer and use MediaRecorder to encode it. But after a lot of struggle and virtually no progress, I hope someone can at least point me in the right direction.
Note: Workarounds where the user has to play the segment while the app records it via MIC will not be accepted.

Sound recognition for Phonegap

I am creating an Android application using Phonegap. I would like to record a sound (for example, a doorbell), and detect if that sound is heard again. Is there any sound recognition plugin for Phonegap?
If not, how can I access the frequencies of the recorded sound (for example, in an array), so I can manually write an algorithm to compare the two audio files? Is there any audio format which stores data in that form?
Thank you

performing voice processing in real time on Android phones

I am new to Android and Java, and I need to know if this would be possible.
I want to capture the sound input from the phone's microphone, perform some computations on this signal, and output the modified signal to the earphones.
Is processing the input to the microphone in real time like this possible?
The Android developers website says that
Note: The Android Emulator does not have the ability to capture audio,
but actual devices are likely to provide these capabilities.
What does it mean by likely? Is it possible that some phones do not even allow reading from the microphone at all?
It is possible.
When you record the audio you can buffer it, do some processing, and then output it, save it, or whatever you need.
For example, there is an app that does the following when someone calls you:
it mixes your voice with a sound you recorded and plays the result to the caller in sync, making them think you are at a party, on a bus, etc. This is an example of processing recorded sound.
Edit 1: Here is a similar topic that should guide you further on how to implement this: Real-time audio processing in Android
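As a rough sketch of that record, process, and play-back loop, using AudioRecord and AudioTrack (the 8 kHz mono format and the simple gain step are illustrative placeholders for real processing):

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;

public class MicPassThrough {
    private static final int SAMPLE_RATE = 8000;

    public void run() {
        int bufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufferSize);
        AudioTrack player = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLE_RATE,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                bufferSize, AudioTrack.MODE_STREAM);

        short[] buffer = new short[bufferSize / 2];
        recorder.startRecording();
        player.play();
        while (!Thread.interrupted()) {
            int read = recorder.read(buffer, 0, buffer.length);
            for (int i = 0; i < read; i++) {
                // Placeholder "processing": double the amplitude and clamp to the 16-bit range.
                int v = buffer[i] * 2;
                buffer[i] = (short) Math.max(Math.min(v, Short.MAX_VALUE), Short.MIN_VALUE);
            }
            player.write(buffer, 0, read);
        }
        recorder.stop();
        recorder.release();
        player.stop();
        player.release();
    }
}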

Android audio and voice processing

I am new to Android and am currently working on an Android voice recording application. I want to know which format is best for saving an audio file on Android (i.e. RAW-AMR, 3gp, or mp4), so that we can hear the playback loudly on the device.
Is there any alternative way to increase the audio level through voice processing on Android?
Thanks in advance.
Question: Which bear is best? Answer: Black Bear
Seriously though, you would need to state your criteria for the audio file for us to make a codec recommendation. Does it need to be portable? Best compression? Highest fidelity?
The codec that you choose has no effect on the loudness of audio that will be played over the device, so this should not factor into your criteria.
Is there an alternative way to increase audio?
Yes, if you are recording audio from the microphone then you can amplify the audio data before you save it to a file.
Let an audio sample from the microphone be represented by the function:
f(t)
Amplification is achieved by multiplying the audio sample by some factor A
A * f(t)
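In code, a minimal sketch of that A * f(t) amplification over 16-bit PCM samples looks like this (the clamping keeps amplified values from wrapping around and distorting; gain is the factor A you choose):

// Multiply each 16-bit PCM sample by the gain A and clamp to the valid range.
short[] amplify(short[] samples, float gain) {
    short[] out = new short[samples.length];
    for (int i = 0; i < samples.length; i++) {
        int amplified = Math.round(samples[i] * gain);
        if (amplified > Short.MAX_VALUE) amplified = Short.MAX_VALUE;
        if (amplified < Short.MIN_VALUE) amplified = Short.MIN_VALUE;
        out[i] = (short) amplified;
    }
    return out;
}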
You can use the AGC (Automatic Gain Control) module from WebRTC to increase the sound level.
I haven't found a simple Java API for it yet, but you can use the C++ API via JNI.
Have a look here: WebRTC AGC (Automatic Gain Control).
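The Java side of such a JNI bridge might only need to declare the native entry point; the library name and method signature below are assumptions for illustration, and the C++ side that actually calls the WebRTC AGC functions has to be written and built separately.

public class WebRtcAgcBridge {
    static {
        System.loadLibrary("webrtc_agc_jni"); // hypothetical .so built from the WebRTC sources
    }
    // Processes one frame of 16-bit PCM and returns the gain-adjusted frame.
    public static native short[] process(short[] frame, int sampleRateHz);
}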
I want to know which format is best for saving an audio file on Android.
To save voice audio on Android (or any other platform), take a look at Opus. It's a free, state-of-the-art audio codec that also supports voice mode.
