Since Android API level 12, RTP is supported in the SDK via the android.net.rtp package, which includes RtpStream as the base class plus AudioStream, AudioCodec, and AudioGroup. However, there are no examples or tutorials, and hardly any documentation, showing how to use these specific APIs to take input from the device's microphone and output it to an RTP stream.
Where do I specify using the mic as the source, and not to use a speaker? Does it perform any RTCP? Can I extend the RtpStream base class to create my own VideoStream class (ideally I would like to use these for video streaming too)?
Any help out there on these new(ish) APIs please?
Unfortunately these APIs are the thinnest possible wrappers around the native code that does the actual work. This means they cannot be extended in Java, and extending them in C++ would, I believe, require a custom Android build.
As far as I can see, the AudioGroup cannot actually be configured to produce no sound output.
I don't believe it performs RTCP, but my use of it doesn't involve RTCP, so I can't say for sure.
My advice: if you want to extend the functionality or need greater flexibility, find a native C or C++ library that someone has written or ported to Android and use that instead. That should let you control which audio source it uses and add video streaming and similar extensions.
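For what it's worth, here is a minimal sketch of how these classes are usually wired together (the class name, addresses, and port are placeholders of mine; the microphone is used implicitly once a stream joins an AudioGroup, and as noted above there is no group mode that keeps the mic active while suppressing playback):

    import android.net.rtp.AudioCodec;
    import android.net.rtp.AudioGroup;
    import android.net.rtp.AudioStream;
    import android.net.rtp.RtpStream;

    import java.net.InetAddress;

    public class RtpMicSender {
        private AudioStream stream;
        private AudioGroup group;

        // Addresses and port are placeholders; RECORD_AUDIO and INTERNET permissions are assumed.
        public void start() throws Exception {
            stream = new AudioStream(InetAddress.getByName("192.168.1.2")); // binds a local RTP socket
            stream.setCodec(AudioCodec.PCMU);                               // G.711 mu-law
            stream.setMode(RtpStream.MODE_SEND_ONLY);                       // send-only: don't render incoming RTP
            stream.associate(InetAddress.getByName("192.168.1.3"), 5004);   // remote RTP endpoint

            group = new AudioGroup();
            group.setMode(AudioGroup.MODE_NORMAL);  // mic + speaker; there is no "mic only, no playback" mode
            stream.join(group);                     // joining the group starts the native media engine
        }

        public void stop() {
            if (stream != null) {
                stream.join(null);                  // leave the group
                stream.release();
            }
            if (group != null) group.clear();
        }
    }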
How to implement a custom audio HAL in Android?
What are the different approaches to implementing the audio HAL in Android, e.g. TinyHAL, UCM, etc.? I am looking at the various approaches to find one that suits my requirements.
The first and most important question you should ask yourself is whether you really need to implement your own HAL.
Secondly, there is some mandatory reading: the Audio Architecture documentation for Android.
Next, check the source code, e.g. for one of the Nexus devices; it can be treated as a reference design.
After lots of coding and lots of debugging, pass the CTS test suite to make sure you are compliant with the Android OS.
I am working on an Android VoIP application that does not need to work over the PSTN. I am a complete novice in this field, and any help will be appreciated.
I started by researching how WhatsApp voice calling works and found that it uses PJSIP, which is an open-source SIP stack library (source: "What's up with WhatsApp and WebRTC?" - webrtcHacks). I also found that codecs are used in VoIP to compress and then decompress the audio packets.
Knowing that, I am extremely confused about the relationship between SIP libraries and codecs. Does an Android VoIP app have to implement a SIP library? Every SIP library supports only a few codecs.
Is there any general way to integrate any codec into my Android app, whether it is Opus, Speex, or something similar, independently of the SIP implementation?
Maybe I am sounding confused, but that is where I am. Even extensive googling on this specific topic did not help me, and my last stop is this community. Any guidance will be appreciated.
Yes, usually every app implements the codecs on its own. Some codecs are available in the Android SDK, but even in those cases a dedicated implementation is usually better.
G.711 (PCMU and PCMA) is very simple and can be implemented within a single Java class (or even a single function if you wish); see the sketch just below. The others are more complicated, but you can find open-source implementations for almost all of them.
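As an illustration of how small a G.711 mu-law (PCMU) encoder can be, here is a sketch based on the standard bias/segment algorithm (the class and method names are my own):

    /** Minimal G.711 mu-law (PCMU) encoder: one 16-bit linear PCM sample -> one 8-bit code. */
    public final class G711u {
        private static final int BIAS = 0x84;   // standard mu-law bias
        private static final int CLIP = 32635;  // clip magnitude before adding the bias

        public static byte encode(short pcm16) {
            int sample = pcm16;
            int sign = (sample >> 8) & 0x80;    // keep the sign bit
            if (sign != 0) sample = -sample;    // work on the magnitude
            if (sample > CLIP) sample = CLIP;
            sample += BIAS;

            // Find the segment: position of the highest set bit above bit 7.
            int exponent = 7;
            for (int mask = 0x4000; (sample & mask) == 0 && exponent > 0; mask >>= 1) {
                exponent--;
            }
            int mantissa = (sample >> (exponent + 3)) & 0x0F;

            // Mu-law code words are stored bit-inverted.
            return (byte) ~(sign | (exponent << 4) | mantissa);
        }
    }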
Also note that codecs are implemented within PJSIP as well, so if you are using that library you already have the most popular codecs available.
I am building a custom audio player with MediaCodec/MediaExtractor/AudioTrack etc. which mixes and plays multiple audio files.
Therefore I need a resampling algorithm in case one of the files has a different sample rate.
I can see that there is a native AudioResampler class available:
https://android.googlesource.com/platform/frameworks/av/+/jb-mr1.1-release/services/audioflinger/AudioResampler.h -
But so far I have not found any examples of how it can be used.
My question:
Is it possible to use the native resampler on Android? (in Java or with JNI)
If yes, does anyone know of an example, or of any docs on how to use this AudioResampler class?
Thanks for any hints!
This is not a public API, so you can't officially rely on using it (and even unofficially, using it would be very hard). You need to find a library (ideally in C, for NDK) to bundle within your app.
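That said, if you only need something quick while you evaluate libraries, a naive linear-interpolation resampler is easy to write in plain Java. This is not what the platform's AudioResampler does, and its quality is clearly below a proper polyphase/sinc resampler, but it illustrates the idea:

    /** Naive linear-interpolation resampler for 16-bit mono PCM (illustrative only). */
    public final class LinearResampler {
        public static short[] resample(short[] in, int inRate, int outRate) {
            if (inRate == outRate || in.length == 0) return in;
            int outLen = (int) ((long) in.length * outRate / inRate);
            short[] out = new short[outLen];
            double step = (double) inRate / outRate;  // input samples consumed per output sample
            double pos = 0.0;
            for (int i = 0; i < outLen; i++) {
                int idx = (int) pos;
                double frac = pos - idx;
                int s0 = in[idx];
                int s1 = (idx + 1 < in.length) ? in[idx + 1] : s0;  // clamp at the last sample
                out[i] = (short) Math.round(s0 + (s1 - s0) * frac);
                pos += step;
            }
            return out;
        }
    }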
I am trying to write a native Android application in which I want to play a couple of audio streams. In order to synchronize properly with the other audio streams in the rest of the Android system, I need to manage playback of these streams correctly. When programming in Java, the Android framework provides APIs like AudioManager.requestAudioFocus and AudioManager.abandonAudioFocus, and also delivers the appropriate callbacks depending on what other audio streams are doing.
So, is there any way I can call these methods from native code?
It seems there is one more option: the OpenSL ES APIs. Does OpenSL ES provide methods similar to requestAudioFocus / abandonAudioFocus?
Thanks in advance
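For reference, the Java-side calls mentioned above look roughly like this (the helper class, listener body, stream type, and duration hint are placeholders of mine); one common workaround is to invoke this Java code from native code through JNI rather than looking for a native equivalent:

    import android.content.Context;
    import android.media.AudioManager;

    public class FocusHelper {
        private final AudioManager audioManager;
        private final AudioManager.OnAudioFocusChangeListener listener =
                focusChange -> {
                    // pause/duck/resume your streams here depending on focusChange
                };

        public FocusHelper(Context context) {
            audioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        }

        public boolean acquire() {
            int result = audioManager.requestAudioFocus(listener,
                    AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN);
            return result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED;
        }

        public void release() {
            audioManager.abandonAudioFocus(listener);
        }
    }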
I am trying to record audio using the Android NDK. People say I can use frameworks/base/media/libmedia/AudioRecord.cpp, but that file is part of the platform source, not the NDK. How can I access and use it?
The C++ libmedia library is not part of the public API. Some people use it, but this is highly discouraged because it might break on some devices and/or in a future Android release.
I developed an audio recording app, and trust me, audio support is very inconsistent across devices; it's very tricky, so IMO using libmedia directly is a bad idea.
The only way to capture raw audio with the public API is to use the Java AudioRecord class. It will give you raw PCM data, which you can then pass to your optimized C routines.
Alternatively, although that's a bit harder, you could write a C/C++ wrapper around the Java AudioRecord class, as it is possible to instantiate Java objects and call methods through JNI.
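A minimal sketch of the AudioRecord approach, with the captured PCM buffer being what you would hand off to native code via JNI (the class name, sample rate, and buffer handling are just placeholders):

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;

    public class MicCapture {
        // 16 kHz mono 16-bit PCM; requires the RECORD_AUDIO permission.
        private static final int SAMPLE_RATE = 16000;

        public void captureOnce() {
            int minBuf = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                    SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                    AudioFormat.ENCODING_PCM_16BIT, minBuf * 2);

            short[] buffer = new short[minBuf];
            recorder.startRecording();
            int read = recorder.read(buffer, 0, buffer.length);  // blocking read of raw PCM samples
            // Here you would pass the first `read` samples of `buffer` to your native routines via JNI.
            recorder.stop();
            recorder.release();
        }
    }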
Maybe a little bit outdated, but:
The safest way of playing/recording audio in native code is to use the OpenSL ES interfaces. Nevertheless, they are available only on Android 2.3+ and, for now, sit on top of the generic AudioFlinger API.
The more robust and simpler way is to use the platform sources to obtain the AudioFlinger headers and a generic libmedia.so to link against at build time.
The device-specific libmedia.so should be preloaded when the application initializes so that AudioFlinger works normally (generally this happens automatically). Note that some vendors change AudioFlinger internals (for unclear reasons), so you may run into memory or behavior issues.
In my experience, AudioFlinger worked on all (2.0+) devices, but sometimes required allocating more memory for the object than the default implementation assumed.
Finally, OpenSL ES is a wrapper with a dynamically loadable C interface, which lets it work with any particular AudioFlinger implementation. It is fairly complicated for simple use cases, and may have even more overhead than the Java AudioTrack/AudioRecord classes because of its internal threading, buffering, and so on.
So consider using Java, or the not-so-safe native AudioFlinger, until Google ships a high-performance audio interface (which seems doubtful for now).
The OpenSL ES API is available from Android 2.3 onward (API level 9).