Capturing unprocessed / raw microphone data on Android devices

This is a question regarding capturing audio content on recent Android devices. I'm working on an Android app that must record unprocessed microphone audio. While trying to figure out how this could be achieved, I came across the Android Compatibility Definition Documents (CDDs) which, since version 5.0 (API 21), state that “device implementations that declare android.hardware.microphone MUST allow capture of raw audio content [...]” (see any CDD from 5.0 and higher, under the Audio Recording section: https://source.android.com/compatibility/cdd).
On the other hand, I have found that there are several audio sources that can be selected through the MediaRecorder APIs, and that in order “To record raw audio select UNPROCESSED. Some devices do not support unprocessed input. Call AudioManager.getProperty(AudioManager.PROPERTY_SUPPORT_AUDIO_SOURCE_UNPROCESSED) first to verify it's available” (source: https://developer.android.com/guide/topics/media/mediarecorder). The PROPERTY_SUPPORT_AUDIO_SOURCE_UNPROCESSED property was added in API 24.
Here’s where I need help: I am confused by the fact that the CDDs indicate that capturing raw data MUST be allowed, while MediaRecorder does not always allow access to it. Perhaps “raw” is not the same as “unprocessed”? Or perhaps this means that raw/unprocessed audio data is for system-level functions only and not available to third-party developers.
Please note that when I tested whether the unprocessed property was available on 3 different Android devices (including a Galaxy S8 with API 26), I never got a positive result.
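For reference, the check I ran was along these lines (a minimal sketch from inside an Activity; the variable names are mine):

    AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
    // getProperty() returns the String "true" when the UNPROCESSED source is supported (API 24+)
    String value = am.getProperty(AudioManager.PROPERTY_SUPPORT_AUDIO_SOURCE_UNPROCESSED);
    boolean unprocessedSupported = "true".equals(value);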
This being said, can someone clarify why the CDDs indicate that capture of raw audio content MUST be available when the unprocessed AudioManager property rarely seems to be? Should I be accessing the raw/unprocessed audio content differently?
Thanks in advance!

The Android CDDs are for developers intending to port the Android platform to other devices/platforms while maintaining cross-compatibility for app developers. This means that the standard, lower-level Android libraries will be able to access the functionality defined in the CDD, but you won't necessarily be able to.
I managed to find an example from androidcookbook.info that goes through capturing raw, uncompressed audio via AudioRecord.
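To make that concrete, here is a minimal sketch of capturing raw PCM with AudioRecord, assuming the availability check from the question and the RECORD_AUDIO permission; the fallback to VOICE_RECOGNITION as a least-processed source is my assumption, not something the CDD guarantees:

    // Pick the least-processed source available (UNPROCESSED needs API 24+)
    int source = unprocessedSupported
            ? MediaRecorder.AudioSource.UNPROCESSED
            : MediaRecorder.AudioSource.VOICE_RECOGNITION; // assumed fallback

    int sampleRate = 48000;
    int minBuf = AudioRecord.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioRecord recorder = new AudioRecord(source, sampleRate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf * 2);

    recorder.startRecording();
    short[] buffer = new short[minBuf / 2];              // 16-bit samples
    int read = recorder.read(buffer, 0, buffer.length);  // raw, uncompressed PCM
    recorder.stop();
    recorder.release();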
If that doesn't suit your needs, then I would 100% recommend checking out OpenSL ES if you haven't looked at/used it yet. It uses the NDK (in C++) to provide a native interface, so latency is much lower and you have access to more audio-processing functionality. Here's Google's audio-echo project on GitHub, which shows real-time recording and playback via the native processing route with OpenSL ES.
I am sure you will have access to the uncompressed audio by using it, and you can pass the data to and from Java / C++ as you wish.

Related

How to record 4th input channel audio

I have an audio card with 4 input channels: mono, stereo, 3, 4. Is there any SDK way to record the 4th channel's data?
Right now I can only record mono/stereo with AudioRecord.
According to the Android docs for AudioRecord, the only input channel configurations specified are AudioFormat.CHANNEL_IN_MONO and AudioFormat.CHANNEL_IN_STEREO.
However, a much higher number of channels appears to be supported according to the AudioFormat docs (or is at least planned, if not already implemented); a quick probe for a 4-channel input configuration is sketched below.
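As a sanity check, you can probe whether the platform even accepts a 4-channel input mask before building on it (a hedged sketch; on most devices this fails):

    // Combine four input position masks into a quad configuration
    int quadMask = AudioFormat.CHANNEL_IN_LEFT | AudioFormat.CHANNEL_IN_RIGHT
                 | AudioFormat.CHANNEL_IN_FRONT | AudioFormat.CHANNEL_IN_BACK;
    int size = AudioRecord.getMinBufferSize(48000, quadMask,
            AudioFormat.ENCODING_PCM_16BIT);
    if (size == AudioRecord.ERROR_BAD_VALUE) {
        // 4-channel capture isn't supported through the SDK on this device
    }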
I think this would be a challenge to implement yourself, but it could be worthwhile doing (I haven't managed to find any obvious solutions or open-source code for it yet). However, an example does exist on the app store, so it is possible via the USB interface, according to the USB Audio Recorder Pro app.
JUCE (a mostly audio-focused C++ library that can compile directly to Android APKs) seems to be working on this as well, but I haven't seen a solution from them yet (maybe in the very near future).
I think you would have to go directly into OpenSL (C++ with JNI) to get at the raw audio being received, and then pass it back into Java to do whatever you want with it. It's probably worth investigating OpenSL recording through USB devices with something like this to get started.
I know this is a lot of links, but hopefully it will get you started if you did want to implement this functionality (comment a link if you ever did start it).
Otherwise I hope this helped anyways!
I have recorded 4th-channel audio. I used tinyalsa through JNI: github.com/tinyalsa/tinyalsa
This way, data is recorded from /dev/snd/pcmC1D0c, but it requires root access or a+rw rights on pcmC1D0c.

Why is it not possible to play an audio file on a voice call in Android

This question might seem like a repetition of questions such as the following:
How to play an audio file on a voice call in android
Background Audio for a Call in Progress - Possible?
The answers to these questions suggest that it is not possible to play pre-recorded audio into a voice call on Android. I want to know why it is not possible. What is the limitation (hardware/software)? Is it really a limitation, or is it by design? Can we alter the Android source code to make it possible?
I think this is a limitation, imposed for security reasons and restricted at the OS level.
Let's analyze the security threat first. If you were able to play custom audio files to the callee, a whole world of cons would open up: you could trick customer support, pretend to be someone else, give unauthorized purchase confirmations, and so on. For this reason, neither Android nor iOS allows this functionality.
On Android, you won't be able to do so in a programmatic way, simply because the current APIs won't allow you to do so. It is stated in the official documentation as well, as pointed out here. If you dig into the source code, you can probably enable this feature by accessing the microphone output during a phone call, but that would require running your custom version of Android. A good starting point would be the AudioTrack source, available here.
EDIT: a good example of an audio mod is enabling the Nexus 5 earpiece as a second loudspeaker (requires root); it can be found here.
After thorough research, what I have come to know is that there is more than one limitation/hurdle to making this possible. These limitations/hurdles sit at three different levels.
The first limitation is at the API level: there is no high-level API to play sound files into the conversation audio during a call, as mentioned in the official Android documentation.
The second limitation is at the Radio Interface Layer (RIL). The RIL passes complete control of the call to the radio daemon (rild), which in turn passes control on to the vendor RIL. That means we cannot manipulate the voice call from the Android source code.
Even if we could remove these two limitations, we might still not be able to play an audio file into an ongoing voice call, because there is a third limitation: every vendor has its own RIL library that communicates with rild. This would require the vendor RIL to be open source, which it is not; hardware vendors do not usually make their device driver code available.
A detailed discussion of this topic can be found at this link.
This is software-related, due to the prioritization of audio routing in Android.
Take a look at CallManager, where you can dig into the setAudioMode() method. After the audio mode is set to MODE_IN_COMMUNICATION, the following code is called:
    audioManager.requestAudioFocusForCall(AudioManager.STREAM_VOICE_CALL,
            AudioManager.AUDIOFOCUS_GAIN_TRANSIENT);
From this point on the telephony service has the highest priority and won't let any other audio play in parallel.
Note: You can play back the audio data only to the standard output device. Currently, that is the mobile device speaker or a Bluetooth headset. You cannot play sound files in the conversation audio during a call.
See the official link: http://developer.android.com/guide/topics/media/mediaplayer.html
By implementing AudioManager.OnAudioFocusChangeListener you can track the AudioManager's state. So if any music is playing in the background, you can react to the focus changes (playing and pausing is completely in the developer's hands).
Some of the native music players on Android devices handle this: they pause the music when the call state is TelephonyManager.EXTRA_STATE_OFFHOOK. So this scenario is also completely in the developer's hands (whether to handle it or not); if it isn't handled, both will play in parallel.
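A minimal sketch of that listener approach (myPlayer stands in for whatever MediaPlayer you manage; this uses the pre-API-26 requestAudioFocus() signature):

    AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
    AudioManager.OnAudioFocusChangeListener listener = focusChange -> {
        switch (focusChange) {
            case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT: // e.g. an incoming call
                myPlayer.pause();
                break;
            case AudioManager.AUDIOFOCUS_GAIN:           // call ended, focus regained
                myPlayer.start();
                break;
            case AudioManager.AUDIOFOCUS_LOSS:           // focus lost for good
                myPlayer.release();
                break;
        }
    };
    if (am.requestAudioFocus(listener, AudioManager.STREAM_MUSIC,
            AudioManager.AUDIOFOCUS_GAIN) == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
        myPlayer.start();
    }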

How Lollipop can play compressed Audio/Video

From Android Developers - Enhanced camera & video:
“Android 5.0 also adds support for multimedia tunneling to provide the best experience for ultra-high definition (4K) content and the ability to play compressed audio and video data together.”
Are there any details or specifics of how Lollipop plays compressed audio/video together? Also, what changed in this regard compared with earlier versions?
From the sources, some understanding can be gained, as captured in this post on Google Groups: https://groups.google.com/forum/#!topic/android-platform/isNabAHLLks
To summarize, video tunneling is a vendor/device-specific implementation.
EDIT: The aforementioned information is for the Video Tunneling feature newly introduced in Lollipop. Normal playback is handled through NuPlayer unless a system property is employed to specifically select StagefrightPlayer, as shown here.
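For reference, this is roughly how an app opts into tunneled playback on API 21+ (a hedged sketch; whether findDecoderForFormat() returns a codec at all is up to the vendor implementation, as noted above):

    AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
    int sessionId = am.generateAudioSessionId();     // shared with the AudioTrack

    MediaFormat format = MediaFormat.createVideoFormat("video/avc", 3840, 2160);
    format.setFeatureEnabled(
            MediaCodecInfo.CodecCapabilities.FEATURE_TunneledPlayback, true);
    format.setInteger(MediaFormat.KEY_AUDIO_SESSION_ID, sessionId);

    MediaCodecList codecs = new MediaCodecList(MediaCodecList.REGULAR_CODECS);
    String decoder = codecs.findDecoderForFormat(format); // null if no tunneled decoder exists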
You can get a clear picture of the concept from https://medium.com/google-exoplayer/tunneled-video-playback-in-exoplayer-84f084a8094d
If you want to know more detail, I think you could download the AOSP project to trace the tunnel-mode APIs.

Adaptive (multibitrate) streaming for Android

Is it possible to perform adaptive (multibitrate) streaming on an Android device? If yes, how?
If you have Android 4.0 or 3.2, you just access the adaptive stream as you would any other video. Literally.
It's plain HTTP access.
So if you would use //mywebsite/video1.mp4 as a data source, you would use the equivalent //mywebsite/video1.m3u8 instead. Now, I'm not including any discussion of how you create your streaming file, only of how you would access it.
All the magic happens within the client (e.g. MediaPlayer, VideoView) supported on 4.0 and 3.2; a minimal sketch follows. For the record, you may be able to access and run streaming segments (.m3u8 files) on earlier versions of Android because manufacturers have sometimes played around with the code, but I haven't found any that actually adapt. They usually stick with the first segment they run, or default to the lowest-bitrate segment in the bunch and stay there regardless.
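Here is that sketch, reusing the placeholder URL from above (the platform player fetches the playlist, downloads segments, and switches bitrates internally):

    MediaPlayer player = new MediaPlayer();
    try {
        player.setDataSource("http://mywebsite/video1.m3u8"); // same call as for an .mp4
        player.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
            @Override
            public void onPrepared(MediaPlayer mp) {
                mp.start();
            }
        });
        player.prepareAsync(); // bitrate switching happens inside the player
    } catch (IOException e) {
        e.printStackTrace();
    }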

add a new codec to Android?

I'm very new to Android, and I need to add my own codec to it. I mean, I want to know what steps I need to take in order to add my own codec to Android.
Since I'm very fresh to this, I need the basics: can someone please explain the steps required to add a new codec to Android?
This is virtually impossible to do in a portable way, as all audio and video codecs are compiled at the platform level (most of the time they require hardware-specific acceleration).
If you are only interested in making this work on a specific hardware platform and have an unlocked bootloader (so you can boot a custom build of Android), you can compile the full Android platform from scratch using the AOSP as a base.
Depending on which version of Android you're targeting, you're looking at adding code to either OpenCore or Stagefright (the subsystems Android uses for A/V decoding and parsing). There you can add audio decoders, audio encoders, video decoders, video encoders, and container parsers.
Here is some discussion of adding to Stagefright:
http://freepine.blogspot.com/2010/01/overview-of-stagefrighter-player.html
http://groups.google.com/group/android-porting/msg/5d88e76845a22bbb
However, unless the encoding scheme you wish to support is very simple (what is it you want to add?), it is likely to be too CPU-intensive for most Android devices to run without offloading some of the work to another subsystem (like the radio chipset or the GPU).
In the Android framework, MediaCodec is implemented on top of Stagefright, a media playback engine at the native level that has built-in software codecs for popular media formats. Stagefright also supports integration with custom hardware codecs provided as OpenMAX components. Here is an article that summarizes the steps.
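Once your component is wired in at the platform level, a quick way to verify that it was registered is to enumerate the codec list from an app (a minimal sketch using the API 16 form of MediaCodecList):

    // Lists every codec the platform registered, including OMX vendor components
    for (int i = 0; i < MediaCodecList.getCodecCount(); i++) {
        MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
        if (!info.isEncoder()) {
            Log.d("Codecs", info.getName() + " -> "
                    + Arrays.toString(info.getSupportedTypes()));
        }
    }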
