I am trying to write a native Android application in which I want to play a couple of audio streams. In order to stay properly synchronized with the other audio streams in the rest of the Android system, I need to manage playback of my streams correctly. When programming in Java, the Android framework provides APIs such as AudioManager.requestAudioFocus and AudioManager.abandonAudioFocus, and it also delivers the appropriate callbacks as other audio streams come and go.
So, is there any way I can call these methods from native code?
It also seems there is another option: the OpenSL ES APIs. Does OpenSL ES provide methods similar to requestAudioFocus/abandonAudioFocus?
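For reference, the Java-side helper I imagine bridging to via JNI would look roughly like this (just a sketch; the class and native-method names are my own):

    import android.content.Context;
    import android.media.AudioManager;

    // Hypothetical helper owned by the app; native code calls
    // requestFocus()/abandonFocus() through JNI and receives focus
    // changes through the native callback below.
    public class AudioFocusBridge {
        private final AudioManager audioManager;

        // Forwards AUDIOFOCUS_GAIN / AUDIOFOCUS_LOSS / ducking events
        // down to native code.
        private final AudioManager.OnAudioFocusChangeListener listener =
                focusChange -> nativeOnFocusChange(focusChange);

        public AudioFocusBridge(Context context) {
            audioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        }

        public boolean requestFocus() {
            int result = audioManager.requestAudioFocus(
                    listener, AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN);
            return result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED;
        }

        public void abandonFocus() {
            audioManager.abandonAudioFocus(listener);
        }

        // Implemented in the app's native library (hypothetical).
        private native void nativeOnFocusChange(int focusChange);
    }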
Thanks in advance
I am working on an Android VoIP application that does not need to work over the PSTN. I am a complete novice in this field, and any help will be appreciated.
I started by researching how WhatsApp voice calls work and found that it uses PJSIP, an open-source SIP stack library (source: What's up with WhatsApp and WebRTC? - webrtcHacks). I also found that codecs are used in VoIP to compress and then decompress the voice packets.
Knowing that, I am extremely confused about SIP libraries versus codecs. Does an Android VoIP app have to implement a SIP library? Every SIP library seems to support only a few codecs.
Is there any general way to integrate any codec (Opus, Speex, or anything like that) into my Android app independently of the SIP implementation?
Maybe I sound confused, but that is where I am. Even extensive googling on this specific topic did not help me, and my last stop is this community. Any guidance will be appreciated.
Yes, usually every app implements the codecs on its own. Some codecs are available in the Android SDK, but even in those cases a dedicated implementation is usually better.
G.711 (PCMU and PCMA) is very simple and can be implemented in a single Java class (or even in a single function if you wish). The others are more complicated, but you can find open-source implementations for almost all of them.
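For example, here is a minimal sketch of the PCMU (mu-law) half, following the standard G.711 companding algorithm:

    // Minimal G.711 mu-law (PCMU) codec sketch: one 16-bit PCM sample
    // in or out per call, following the standard companding algorithm.
    public final class G711u {
        private static final int BIAS = 0x84;   // standard mu-law bias (132)
        private static final int CLIP = 32635;  // clip level before companding

        /** Encode one 16-bit linear PCM sample to one 8-bit mu-law byte. */
        public static byte encode(short pcm) {
            int sign = (pcm >> 8) & 0x80;          // remember the sign bit
            int sample = sign != 0 ? -pcm : pcm;   // work on the magnitude
            if (sample > CLIP) sample = CLIP;
            sample += BIAS;
            int exponent = 7;                      // locate the segment
            for (int mask = 0x4000; (sample & mask) == 0 && exponent > 0; mask >>= 1) {
                exponent--;
            }
            int mantissa = (sample >> (exponent + 3)) & 0x0F;
            return (byte) ~(sign | (exponent << 4) | mantissa);  // mu-law bytes are stored inverted
        }

        /** Decode one 8-bit mu-law byte back to a 16-bit linear PCM sample. */
        public static short decode(byte mu) {
            int u = ~mu & 0xFF;
            int exponent = (u >> 4) & 0x07;
            int mantissa = u & 0x0F;
            int sample = (((mantissa << 3) + BIAS) << exponent) - BIAS;
            return (short) ((u & 0x80) != 0 ? -sample : sample);
        }
    }

Encoding a frame is then just a loop over the samples; for RTP payload type 0 this is one byte per sample at 8 kHz.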
Also note that codecs are implemented within PJSIP as well, so if you are using that library you already have the most popular codecs available.
I'm fairly new to Android development, but from what I've read I am pretty sure there is no direct API access to the in-call audio stream. I need to make something very close to an answering machine, so I was wondering whether there is any API support for feeding an audio stream to the microphone input internally, as a workaround for not having access to the call stream. That is: someone calls, we answer, mute the microphone so the caller can't hear the environment, and feed an alternative audio stream (some kind of recording) to the microphone input while it is muted. That was the only workaround I could think of, and I have no idea whether something like that is possible, but I will appreciate any feedback. Thanks!
No, Android does not support injecting audio into the voice-call uplink. There are some mobile platforms that support this functionality, but it's not available to app developers, since there's no API in place for it, regardless of whether you're using the Java APIs or the native APIs.
The accepted answer is no longer correct. Maybe it was true nine years ago, but I know it is possible to inject audio into the uplink using the tinyalsa library.
I want to develop an application for video calls on Android. I thought of using the built-in SIP stack introduced in Android 2.3.3, but how can I initiate video calls? I see that this is not supported.
I believe the generic Android SIP stack supports video.
Taken from:
https://developer.android.com/reference/android/net/sip/package-summary.html
"If you want to create generic SIP connections (such as for video calls or other), you can create a SIP connection from the SipManager, using open(). If you only want to create audio SIP calls, though, you should use the SipAudioCall class, as described above."
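In practice, opening a profile for generic SIP use looks roughly like this (a minimal sketch with placeholder account details; the SIP stack is optional on devices, so check for it first):

    import android.content.Context;
    import android.net.sip.SipException;
    import android.net.sip.SipManager;
    import android.net.sip.SipProfile;
    import java.text.ParseException;

    public void openSipProfile(Context context) throws ParseException, SipException {
        // The SIP API is optional, hardware-dependent functionality.
        if (!SipManager.isApiSupported(context)) {
            return;
        }
        SipManager manager = SipManager.newInstance(context);

        // Placeholder account details.
        SipProfile profile = new SipProfile.Builder("alice", "example.com")
                .setPassword("secret")
                .build();

        // open() registers the profile for generic SIP connections
        // (e.g. video); for audio-only calls, SipAudioCall is simpler.
        manager.open(profile);
    }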
If you don't mind using external SIP stacks, check out this:
http://www.youtube.com/watch?v=g1NHEsXFEns
which uses Jain-SIP.
EDIT: As of late, this project seems to be the leader in the native Android SIP space:
https://code.google.com/p/csipsimple/ - open source, and they offer everything you need to make voice and video calls.
Since Android API level 12, RTP is supported in the SDK, which includes RtpStream as the base class, plus AudioStream, AudioCodec, and AudioGroup. However, there is no documentation, and no examples or tutorials, to help me use these specific APIs to take input from the device's microphone and output it to an RTP stream.
Where do I specify using the mic as the source, and not to use a speaker? Does it perform any RTCP? Can I extend the RtpStream base class to create my own VideoStream class (ideally I would like to use these for video streaming too)?
Any help out there on these new(ish) APIs please?
Unfortunately, these APIs are the thinnest necessary wrapper around the native code that performs the actual work. This means they cannot be extended in Java, and to extend them in C++ you would need a custom Android build, I believe.
As far as I can see, AudioGroup cannot actually be set to not output sound.
I don't believe it does any RTCP, but my use of it doesn't involve RTCP, so I can't say for sure.
My advice: if you want to extend the functionality or have greater flexibility, find a C or C++ native library that someone has written or ported to Android and use that instead. That should let you control what audio it uses and add video streaming and other such extensions.
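That said, the basic wiring of the SDK classes looks something like the sketch below (addresses and ports are placeholders). There is no explicit microphone parameter: AudioGroup is what ties the device's mic and speaker to the RTP side.

    import android.media.AudioManager;
    import android.net.rtp.AudioCodec;
    import android.net.rtp.AudioGroup;
    import android.net.rtp.AudioStream;
    import android.net.rtp.RtpStream;
    import java.net.InetAddress;
    import java.net.SocketException;

    public void startRtpAudio(AudioManager audioManager,
                              InetAddress localAddr,
                              InetAddress remoteAddr,
                              int remotePort) throws SocketException {
        // Route audio for a VoIP-style session.
        audioManager.setMode(AudioManager.MODE_IN_COMMUNICATION);

        AudioStream stream = new AudioStream(localAddr);  // binds a local RTP socket
        stream.setCodec(AudioCodec.PCMU);                 // RTP payload type 0
        stream.setMode(RtpStream.MODE_NORMAL);            // send and receive
        stream.associate(remoteAddr, remotePort);         // the far end

        AudioGroup group = new AudioGroup();
        group.setMode(AudioGroup.MODE_NORMAL);            // MODE_MUTED silences the mic
        stream.join(group);                               // starts the media flow
    }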
Is there documentation explaining the Android Stagefright architecture?
Can I get some pointers on this subject?
A good explanation of Stagefright is provided at http://freepine.blogspot.com/2010/01/overview-of-stagefrighter-player.html.
There is a new playback engine implemented by Google that ships with Android 2.0 (i.e., Stagefright), which seems quite simple and straightforward compared with the OpenCORE solution.
MediaExtractor is responsible for retrieving track data and the corresponding metadata from the underlying file system or HTTP stream (see the sketch after this list);
OMX is leveraged for decoding: there are currently two OMX plugins, adapting to PV's software codecs and vendors' hardware implementations respectively. There is also a local implementation of software codecs that wraps PV's decoder APIs directly;
AudioPlayer is responsible for rendering audio; it also provides the timebase for timing and A/V synchronization whenever an audio track is present;
Depending on which codec is picked, a local or remote renderer is created for video rendering; the system clock is used as the timebase for video-only playback;
AwesomePlayer works as the engine coordinating the above modules, and is finally connected into the Android media framework through the StagefrightPlayer adapter.
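As an aside, to get a feel for the extractor/decoder split, the public android.media.MediaExtractor API (added to the SDK later) exposes the same idea at the app level; a minimal sketch of enumerating tracks:

    import android.media.MediaExtractor;
    import android.media.MediaFormat;
    import java.io.IOException;

    public void listTracks(String path) throws IOException {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(path);  // file path or http:// URL

        // Each track exposes its own format and metadata, mirroring the
        // engine-level extractor described above.
        for (int i = 0; i < extractor.getTrackCount(); i++) {
            MediaFormat format = extractor.getTrackFormat(i);
            String mime = format.getString(MediaFormat.KEY_MIME);
            System.out.println("track " + i + ": " + mime);
        }
        extractor.release();
    }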
Look at this post.
Also, the Android player is built on the PacketVideo (PV) player; here are the docs about it (beware of really slow transfer speeds :) ):
PVPlayer SDK Developer's Guide link 1, link 2
PVPlayer return codes link
Starting with Gingerbread, it is the Stagefright framework instead of the PV framework. The link above has good info about the framework. If you have specific questions, I may be able to help you out.
Thanks, Dolphin