How to capture the audio output stream in Android?

I am a newbie in development and I am trying to create an equalizer on the Android platform.
How can I capture the audio output stream on Android? I just need access to the audio data that goes out from my application.
(I have already searched www.developers.android.com and have not found any information.)

There's currently no functionality in Android for recording audio output (well, there's the Visualizer API, which lets you grab partial, low-quality audio for visualization purposes).
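For completeness, the Visualizer route looks roughly like this (a sketch; it needs the RECORD_AUDIO permission, mediaPlayer stands in for whatever player instance you own, and the captured data is 8-bit waveform meant only for visualization):

```java
import android.media.audiofx.Visualizer;

// Attach to the player's audio session (or 0 for the global output mix).
Visualizer visualizer = new Visualizer(mediaPlayer.getAudioSessionId());
visualizer.setCaptureSize(Visualizer.getCaptureSizeRange()[1]); // largest supported capture size

visualizer.setDataCaptureListener(new Visualizer.OnDataCaptureListener() {
    @Override
    public void onWaveFormDataCapture(Visualizer v, byte[] waveform, int samplingRate) {
        // 8-bit unsigned mono samples; fine for drawing, too lossy for recording
    }

    @Override
    public void onFftDataCapture(Visualizer v, byte[] fft, int samplingRate) {
        // unused here; enable the fft flag below if you want frequency data
    }
}, Visualizer.getMaxCaptureRate() / 2, true /* waveform */, false /* fft */);

visualizer.setEnabled(true);
```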
If you only need to apply the effect to the audio from your own app, you could do the "recording" internally. That is, in your app, send the decoded audio data to your effect to be processed, and then send the processed data to your AudioTrack or OpenSL ES buffer queue.
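A minimal sketch of that internal loop, assuming your decoder hands you 16-bit PCM; decodeNextChunk and applyEffect below are hypothetical stand-ins for your own decoder and effect:

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

int sampleRate = 44100; // assumption: your decoder's output rate
int minBuf = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);

AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        minBuf, AudioTrack.MODE_STREAM);
track.play();

short[] pcm;
while ((pcm = decodeNextChunk()) != null) { // decodeNextChunk(): your decoder, hypothetical
    applyEffect(pcm);                       // process in place: EQ, gain, etc.
    track.write(pcm, 0, pcm.length);        // processed audio goes to the output
}
track.stop();
track.release();
```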

Related

How to record microphone to more compressed format during WebRTC call on Android?

I have an app that makes calls using WebRTC. During a call I need to record the microphone. WebRTC has a WebRTCAudioRecord object to record audio, but the resulting file is very large (16-bit PCM). I want to record at a smaller size.
I've tried MediaRecorder, but it doesn't work: WebRTC is already recording, and MediaRecorder does not have permission to record while a call is in progress.
Has anyone done this, or have any ideas that could help?
WebRTC is widely considered one of the better pre-processing tools for audio and video. Its native development includes fully optimized C and C++ classes that maintain excellent speech quality and intelligibility.
Reference link: https://github.com/jitsi/webrtc/tree/master/examples
As the problem states:
"I want to record but to a smaller size. I've tried MediaRecorder and it doesn't work because WebRTC is recording and MediaRecorder does not have permission to record while calling."
First of all, to reduce or minimize the size of your recorded data (audio bytes), you should look at speech codecs, which reduce the size of the recorded data while maintaining sound quality. Well-known speech codecs include:
OPUS
SPEEX
G.711 (G-series speech codecs)
As far as the size of the audio data is concerned, it depends on the sample rate and the duration of each recorded chunk (audio packet).
Suppose sample rate = 8000 Hz (16-bit mono) and time = 40 ms; then the recorded data = 640 bytes (or 320 shorts).
The size of the recorded data is directly proportional to both the time and the sample rate.
Sample rate = 8000 Hz, 16000 Hz, etc. (the greater the sample rate, the greater the size).
For more detail, see the fundamentals of audio data representation. Note that WebRTC mainly processes 10 ms chunks of audio for pre-processing, which reduces the packet size to 160 bytes.
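The arithmetic behind those numbers, as a worked example (16-bit mono PCM assumed):

```java
int sampleRate = 8000;      // Hz
int bytesPerSample = 2;     // 16-bit PCM
int channels = 1;           // mono

// bytes = sampleRate * (milliseconds / 1000) * bytesPerSample * channels
int bytesFor40ms = sampleRate * 40 / 1000 * bytesPerSample * channels; // 640 bytes (320 shorts)
int bytesFor10ms = sampleRate * 10 / 1000 * bytesPerSample * channels; // 160 bytes, WebRTC's frame
```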
Secondly, using multiple AudioRecorder instances at the same time is practically impossible. Since WebRTC is already recording from the microphone, a MediaRecorder instance would not do anything, as this answer explains: audio-record-multiple-audio-at-a-time. WebRTC has the following methods to manage audio bytes (see the sketch after this list):
1. Push input PCM data into `ProcessCaptureStream` to process in place.
2. Get the processed PCM data from `ProcessCaptureStream` and send to far-end.
3. The far end pushes the received data into `ProcessRenderStream`.
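As a sketch of that flow (the `apm` object and every method below are hypothetical JNI bindings around WebRTC's native audio processing, not a published Java API):

```java
// 10 ms at 8000 Hz, 16-bit mono = 80 shorts = 160 bytes (see above)
short[] capture = new short[80];

readFromMic(capture);                    // hypothetical: fill with raw mic PCM
apm.processCaptureStream(capture);       // 1. processed in place
sendToFarEnd(capture);                   // 2. hypothetical network send

short[] received = receiveFromFarEnd();  // hypothetical network receive
apm.processRenderStream(received);       // 3. far end processes before playback
playToSpeaker(received);                 // hypothetical playback
```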
I maintain a complete tutorial on audio processing with WebRTC; for more details, see Android-Audio-Processing-Using-Webrtc.
There are two parts to the solution:
Get the raw PCM audio frames from webrtc
Save them to a local file in a compressed format so they can be played back later
For the first part, you have to attach a SamplesReadyCallback while creating the audio device module, by calling the setSamplesReadyCallback method of JavaAudioDeviceModule. This callback gives you the raw audio frames captured from the mic by webrtc's AudioRecord.
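Wiring that up looks roughly like this (a sketch against the org.webrtc Android SDK; exact builder options vary between WebRTC versions):

```java
import org.webrtc.JavaAudioDeviceModule;
import org.webrtc.PeerConnectionFactory;

JavaAudioDeviceModule adm = JavaAudioDeviceModule.builder(context)
        .setSamplesReadyCallback(samples -> {
            // Raw PCM captured by webrtc's AudioRecord.
            byte[] pcm = samples.getData();
            int sampleRate = samples.getSampleRate();
            int channels = samples.getChannelCount();
            // hand the buffer to your encoder (second part below)
        })
        .createAudioDeviceModule();

// Assumes PeerConnectionFactory.initialize(...) was called earlier.
PeerConnectionFactory factory = PeerConnectionFactory.builder()
        .setAudioDeviceModule(adm)
        .createPeerConnectionFactory();
```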
For the second part, you have to encode the raw frames and write them to a file. Check out this sample from Google on how to do it: https://android.googlesource.com/platform/frameworks/base/+/master/packages/SystemUI/src/com/android/systemui/screenrecord/ScreenInternalAudioRecorder.java#234
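The encoding side follows the same pattern as the linked sample: configure an AAC encoder with MediaCodec and feed it the PCM from the callback. A condensed sketch (sample rate and bitrate are assumptions; draining into a MediaMuxer is left out):

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;

MediaFormat format = MediaFormat.createAudioFormat(
        MediaFormat.MIMETYPE_AUDIO_AAC, 48000 /* sample rate */, 1 /* channels */);
format.setInteger(MediaFormat.KEY_AAC_PROFILE,
        MediaCodecInfo.CodecProfileLevel.AACObjectLC);
format.setInteger(MediaFormat.KEY_BIT_RATE, 64_000); // far smaller than raw PCM

MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC);
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
encoder.start();

// Feed PCM from the SamplesReadyCallback into the encoder's input buffers,
// then drain the output buffers into a MediaMuxer to produce an .m4a file.
```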

Android: Record raw audio and record video at the same time

I am developing an Android app based on sound and video recording. I would like real-time playback of the mic audio in the headphones while previewing AND capturing the video and sound.
What I have now, each working fine on its own:
1) Use the Superpowered library to record audio and play it back in real time (during preview and recording). Behind the scenes, it does the work of AudioRecord directly in C++, pushing the buffer to the output (headphones). The goal is to apply audio effects to the raw sound in real time.
2) Capture the video with MediaRecorder.
When audio playback is running and I try to launch the video recording, it crashes on start:
E/MediaRecorder: start failed: -2147483648
I imagine I can't launch two recording processes at the same time. I think using AudioRecord or the Superpowered library is the right way to process the raw audio, but I can't figure out how to record video without conflicting with the ongoing audio recording.
So, is there a way to achieve this feature?
(minSdk 16)
According to bigflake:
The MediaCodec class first became available in Android 4.1 (API 16). It was added to allow direct access to the media codecs on the device.
In Android 4.3 (API 18), MediaCodec was expanded to include a way to provide input through a Surface (via the createInputSurface method). This allows input to come from camera preview or OpenGL ES rendering.
So, if possible, consider raising minSdk to 18 and using AudioVideoRecordingSample or HWEncoderExperiments as examples.
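The API 18 path those samples use looks roughly like this (a sketch; resolution, bitrate, and frame rate are assumptions):

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);

// API 18+: the camera preview (or GL rendering) draws into this surface, so
// video encoding never opens the microphone and cannot conflict with an
// AudioRecord/Superpowered session.
Surface inputSurface = encoder.createInputSurface();
encoder.start();
```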

Video call recording in Twilio Android SDK

I know Twilio doesn't support video call recording on the server, but I've been trying to figure out how to do it locally on the Android side. I have studied the video-quickstart-android code while trying to work out how to extract the video stream from the LocalVideoTrack and VideoTrack classes of the Twilio Android Conversations API, but I couldn't find any method for extracting the underlying video stream and recording it locally on the device.
Does anyone have any idea how I can get the video stream from the Twilio Conversations API for Android, so I can record video locally on the device?
You would have to write a custom video renderer that takes each frame and converts them into your preferred media format.
As an example, the VideoViewRenderer takes frames and passes them to org.webrtc.SurfaceViewRenderer, rendering them to a View. In this case you would write another renderer, perhaps named VideoRecorderRenderer, that implements the VideoRenderer interface and does the work of taking each I420Frame and converting it to a media type. You could then add the VideoRecorderRenderer to the VideoTrack. However, this alone may not be the solution you are looking for, since it covers only the video portion of the media and not the audio. The AudioTrack does not expose an interface to capture the audio output at the moment.
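A skeleton of that approach might look like this (a sketch; the interface and frame types are assumed from the Conversations SDK of that era, and the conversion body is only a placeholder):

```java
// Package names assumed from the Twilio Conversations SDK of that era.
import com.twilio.conversations.I420Frame;
import com.twilio.conversations.VideoRenderer;

public class VideoRecorderRenderer implements VideoRenderer {
    @Override
    public void renderFrame(I420Frame frame) {
        // Convert the YUV (I420) frame to your target media format here,
        // e.g. queue it into a MediaCodec video encoder.
    }
}

// Usage (sketch): videoTrack.addRenderer(new VideoRecorderRenderer());
```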

Recording and recognizing sounds on Android

I am developing an app where the user can record a sound, maybe a clap or something else. The Android system should save it until the user deletes the command. With that recorded sound the user can select commands, so that the device, for example, shows an intent. Now my question: is it possible, while the microphone is on, for the device to start the intent when it hears the recorded sound, and if so, how can I do that? I think I might do it with getMaxAmplitude(), but I need a method to determine the duration of that amplitude.
https://code.google.com/p/musicg/ may help you.
You can find the demo app on the site.
The features described on the site are as follows:
Clap Api - Detect whether the input audio is a clap
Whistle Api - Detect whether the input audio is a whistle
Read PCM WAVE Headers
Read audio data
Trim the audio data
Save the edited audio file
Read amplitude-time domain data
Read frequency-time domain data
Render audio wave form image (Requires Java 2D & Java Image I/O, Android non-compatible)
Render audio spectrogram image (Requires Java 2D & Java Image I/O, Android non-compatible)
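Using the Clap API, for example, looks roughly like this (a sketch based on the library's published examples; the file path is a placeholder):

```java
import com.musicg.api.ClapApi;
import com.musicg.wave.Wave;
import com.musicg.wave.WaveHeader;

// Load the user's recorded WAV clip (placeholder path).
Wave wave = new Wave("/sdcard/recorded_clip.wav");
WaveHeader header = wave.getWaveHeader();

// The detector is configured from the clip's format header.
ClapApi clapApi = new ClapApi(header);
boolean isClap = clapApi.isClap(wave.getBytes());
```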

Android custom audio player to replace MediaPlayer with libPd or OpenSL ES or AudioTrack

I have already developed a streaming audio application using the MediaPlayer API. All the features work fine, except that it takes too long to start playback (the buffering time is too long).
I want to add recording of the live audio stream (saving the live stream data to disk, not recording from the mic). As MediaPlayer does not provide any API to access the raw data stream, I am planning to build a custom audio player.
I want to control the buffering time, have access to the raw audio stream, and be able to play all the audio formats that Android supports natively. Which API (libPd, OpenSL ES, or AudioTrack) is suitable for building a custom audio player on Android?
In my experience, OpenSL ES would be the choice. Here is a link that explains how to do audio streaming, which you may find useful. bufferframes determines how many samples you collect before playing, so smaller bufferframes means a faster response time, but you have to balance that against your device's processing capabilities.
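If you end up staying in Java instead, the equivalent AudioTrack streaming loop looks like this (a sketch; the format values and readDecodedChunk are assumptions standing in for your decoder):

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

int sampleRate = 44100;
// The buffer plays the same balancing role as bufferframes in OpenSL ES:
// larger means more startup latency but fewer underruns.
int bufferSize = 4 * AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);

AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        bufferSize, AudioTrack.MODE_STREAM);
track.play();

byte[] chunk;
while ((chunk = readDecodedChunk()) != null) { // hypothetical: PCM from your decoder
    track.write(chunk, 0, chunk.length);       // also where you could copy the stream to disk
}
track.release();
```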
You can record with libpd (pd-for-android) too.
The entire recording process is managed by libpd.
Check the ScenePlayer project; it uses libpd and lets you record audio into a folder on the SD card:
https://github.com/libpd/pd-for-android/tree/master/ScenePlayer
