Extracting Raw Voice Data from Android Smartphone Microphone - android

I need to know how to extract raw voice data from Android in real time. The data should be in the time domain.

You can try http://eagle.phys.utk.edu/guidry/android/index.html
or
http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/app/VoiceRecognition.html
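Beyond the links above: the standard API for raw, real-time capture on Android is `AudioRecord`, which fills a buffer with 16-bit little-endian PCM bytes when configured with `AudioFormat.ENCODING_PCM_16BIT`. Decoding those bytes into time-domain amplitude samples is plain Java; a minimal sketch (the capture call itself needs an Android runtime, so it is only shown in the comment):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Decodes a buffer of 16-bit little-endian PCM bytes, as delivered by
// AudioRecord.read(byte[], int, int) on Android, into time-domain samples.
public class PcmDecoder {
    public static short[] toSamples(byte[] pcm, int length) {
        short[] samples = new short[length / 2];
        ByteBuffer.wrap(pcm, 0, length)
                  .order(ByteOrder.LITTLE_ENDIAN)
                  .asShortBuffer()
                  .get(samples);
        return samples;
    }

    public static void main(String[] args) {
        // On Android you would obtain buf via:
        //   int n = audioRecord.read(buf, 0, buf.length);
        // Here: the samples 0x1234 and 0x7FFF encoded little-endian.
        byte[] buf = {0x34, 0x12, (byte) 0xFF, 0x7F};
        short[] s = toSamples(buf, buf.length);
        System.out.println(s[0] + " " + s[1]); // 4660 32767
    }
}
```

Each `short` in the result is one time-domain amplitude value, ready for plotting or FFT input.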

Related

Is there a way to record audio, start listening on a click, store it on our local machine, and convert it from speech to text?

Is there a way to record audio, start listening on a click, stop automatically when the user stops speaking, store the recorded audio on our local machine, convert that speech to text, and send the text string to the LUIS (Microsoft) API using React Native (0.60)?

How to get remote audio stream in WebRTC on Android

How can I get and convert the remote audio stream data on Android, preferably in the form of byte-array buffers? I would then upload the data to a third-party service for processing.
I've seen something similar done on Web here and here with the help of MediaStream Recording API, where they used MediaRecorder as the media to get the buffered data from MediaStream into an array. How can I do something similar on Android?
I know that it's possible to obtain the local audio data by setting a listener that implements SamplesReadyCallback via the setSamplesReadyCallback() method when creating the local AudioDeviceModule used for capturing local audio. This method doesn't really involve WebRTC's PeerConnection; it essentially just sets a callback listener on the local audio data. Worst case, I can retrieve the local audio data this way and send it to the remote side via a DataChannel, but I would still prefer a solution that avoids sending essentially the same data twice.

Play audio from array of bytes in android?

I want to stream live audio from one device to many devices. I am recording my voice on Android, and while it is recording I am sending the bytes to a server and receiving them on the other devices. What I get is an array of bytes, and I receive many such arrays every second. Now I want to play those bytes as audio. MediaPlayer requires a file to play, but I can't save the data to a file because it is still arriving, so I am confused about whether I am doing this the wrong way. Actually, I want to make two apps: in one app we speak, and in the other app we can hear in real time what is being said on the other side.
The AudioTrack class allows streaming of PCM audio buffers, via write (byte[] audioData, int offsetInBytes, int sizeInBytes) (among other methods).
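The part that usually needs care around AudioTrack's write() is buffering the packets arriving from the network. A minimal sketch of that producer/consumer logic, with the actual `AudioTrack.write(chunk, 0, chunk.length)` call (on a track created in `AudioTrack.MODE_STREAM`) left as a comment, since it needs an Android runtime; the one-second poll timeout is an arbitrary choice for the sketch:

```java
import java.io.OutputStream;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Buffers incoming PCM packets from the network thread and hands them,
// in order, to a playback loop.
public class PcmStreamBuffer {
    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();

    // Called from the network thread each time a packet of PCM bytes arrives.
    public void onPacket(byte[] pcm) {
        queue.offer(pcm);
    }

    // Playback loop: blocks until data is available, then writes it out.
    // Exits after one second of silence (sketch behavior only).
    public void drainTo(OutputStream sink) throws Exception {
        byte[] chunk;
        while ((chunk = queue.poll(1, TimeUnit.SECONDS)) != null) {
            // On Android, replace the line below with:
            //   audioTrack.write(chunk, 0, chunk.length);
            sink.write(chunk, 0, chunk.length);
        }
    }
}
```

Because AudioTrack in MODE_STREAM blocks in write() while its internal buffer is full, the consumer thread naturally paces itself to the playback rate.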

Raw audio packets to WAV/GSM_MS compliant file on Android

I'm looking for the logic/code snippet that can convert my raw audio packets to a WAV/GSM_MS compliant audio file. I am able to capture data from the Android device mic and store it in a buffer or file.
Assuming your raw data is already interleaved, all you need to do is prepend a WAVE header at the beginning. The WAVE header format is described here: https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
When you create a new WAVE file, always write the header first (with the data-length field set to zero, since you don't know the total size of the data at the beginning of the recording), then start writing your data immediately after the header. Once you are done writing the data, seek back to the beginning and update the data-length field.
Here http://www.codeproject.com/Articles/129173/Writing-a-Proper-Wave-File is code that does the same.
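The write-zero-then-patch approach described above can be sketched in plain Java (the same logic works unchanged on Android); this assumes uncompressed 16-bit PCM and the canonical 44-byte header from the linked WAVE format page:

```java
import java.io.RandomAccessFile;

// Writes a 44-byte PCM WAVE header with zeroed length fields; the caller then
// appends raw samples and finally calls patchSizes() to seek back and fix
// the two length fields, as described in the answer above.
public class WavWriter {
    public static void writeHeader(RandomAccessFile f, int sampleRate,
                                   short channels, short bitsPerSample) throws Exception {
        int byteRate = sampleRate * channels * bitsPerSample / 8;
        short blockAlign = (short) (channels * bitsPerSample / 8);
        f.writeBytes("RIFF");
        writeIntLE(f, 0);                 // RIFF chunk size: patched later
        f.writeBytes("WAVE");
        f.writeBytes("fmt ");
        writeIntLE(f, 16);                // fmt sub-chunk size for PCM
        writeShortLE(f, (short) 1);       // audio format 1 = PCM
        writeShortLE(f, channels);
        writeIntLE(f, sampleRate);
        writeIntLE(f, byteRate);
        writeShortLE(f, blockAlign);
        writeShortLE(f, bitsPerSample);
        f.writeBytes("data");
        writeIntLE(f, 0);                 // data chunk size: patched later
    }

    public static void patchSizes(RandomAccessFile f) throws Exception {
        int dataLen = (int) f.length() - 44;
        f.seek(4);  writeIntLE(f, 36 + dataLen);  // RIFF chunk size
        f.seek(40); writeIntLE(f, dataLen);       // data chunk size
    }

    // RandomAccessFile writes big-endian by default; WAVE needs little-endian.
    private static void writeIntLE(RandomAccessFile f, int v) throws Exception {
        f.write(v & 0xFF); f.write((v >> 8) & 0xFF);
        f.write((v >> 16) & 0xFF); f.write((v >> 24) & 0xFF);
    }

    private static void writeShortLE(RandomAccessFile f, short v) throws Exception {
        f.write(v & 0xFF); f.write((v >> 8) & 0xFF);
    }
}
```

Note that a GSM_MS-encoded WAVE file would need a different `fmt ` chunk (format tag 49 plus extra fields), so this sketch covers only the PCM case.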

CallBack for the recorded block in MediaRecorder

I am trying to record voice from the mic using the MediaRecorder class. That class only provides the setOutputFile method to set an output file, but I need to get a buffer of the recorded voice; that is, I need something like a callback method that returns a block of recorded bytes as it is captured, so that I can send those bytes to another device...
Actually, I want to stream the recorded voice through a socket to another device as it is being recorded, not save the recording and then read the file and send it, since that introduces an unexpected delay...
Alireza,
This can be done pretty easily. All you have to do is set up a socket, create a ParcelFileDescriptor from that socket, and pass that file descriptor to setOutputFile. This sets up the streaming part, but you will then have formatting issues with the file, because MediaRecorder reserves the header space of the file but only writes the header after the stream has finished. To have a functional file on the server side, you will have to parse the header and write it at the beginning of the file (or buffer).
Good luck,
B-Rad
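On the header-fixing step in the answer above: with MediaRecorder's MPEG_4 output, the file is a sequence of top-level boxes (a 4-byte big-endian size followed by a 4-byte ASCII type), and the metadata box written at the end, `moov`, has to be located so it can be moved in front of the media data. A minimal box-index sketch under those assumptions (it ignores the 64-bit and to-end-of-file size variants):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Walks the top-level boxes of an ISO-BMFF (MP4/3GP) byte stream so the
// 'moov' box, which MediaRecorder writes last, can be found and relocated.
public class Mp4BoxScanner {
    public static List<String> topLevelBoxes(byte[] file) {
        List<String> types = new ArrayList<>();
        int pos = 0;
        while (pos + 8 <= file.length) {
            int size = ((file[pos] & 0xFF) << 24) | ((file[pos + 1] & 0xFF) << 16)
                     | ((file[pos + 2] & 0xFF) << 8) | (file[pos + 3] & 0xFF);
            types.add(new String(file, pos + 4, 4, StandardCharsets.US_ASCII));
            if (size < 8) break; // size 0 (to end) or 1 (64-bit): not handled here
            pos += size;
        }
        return types;
    }
}
```

Scanning a streamed recording with this and finding `moov` after `mdat` confirms the reordering problem the answer describes.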
