I'm fairly new to Android development, but from what I've read I'm pretty sure there is no direct API access to the in-call audio stream. I need to make something very close to an answering machine, so I was wondering if there is any API support for feeding an audio stream to the microphone internally, as a workaround for not having access to the call stream. I.e., someone calls, we answer, mute the microphone so the caller can't hear the environment, and feed an alternative audio stream (some kind of recording) to the microphone while it is muted... That is the only workaround I can think of, and I have no idea whether something like that is possible, but I will appreciate any feedback. Thanks!
No, Android does not support injection of audio into the voice call uplink. There are some mobile platforms that support this functionality, but it's not something that's available to app developers on Android, since there's no API in place for it, regardless of whether you're using the Java APIs or the native APIs.
The accepted answer is no longer correct. Maybe it was true nine years ago, but I know that it's possible to inject audio into the uplink using the tinyalsa library.
Related
I am working on an Android VoIP application that does not need to work over the PSTN. I am a complete novice in this field, and any little help will be appreciated.
I started by researching how WhatsApp voice calling works and found out that it uses PJSIP, which is an open-source SIP stack library (source: What's up with WhatsApp and WebRTC? - webrtcHacks). I also found that codecs are used in VoIP to compress and then decompress the voice packets.
Knowing that, I am extremely confused about SIP libraries and codecs. Does an Android VoIP app have to implement a SIP library? Every SIP library supports only a few codecs.
Is there any general way to integrate any codec (Opus, Speex, or anything like that) into my Android app, independent of the SIP implementation?
Maybe I sound confused, but that is where I am. Even a lot of googling on this specific topic did not help me, and this community is my last stop. Any little guidance will be appreciated.
Yes, usually every app implements the codecs on its own. Some codecs are available in the Android SDK, but even in those cases a proper independent implementation is usually better.
G.711 (PCMU and PCMA) is very simple and can be implemented within a single Java class (or even a single function if you wish). The others are more complicated, but you can find open-source implementations for almost all of them.
Also note that codecs are also implemented within PJSIP, so if you are using that library you already have the most popular codecs available.
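To make that concrete, here is a minimal sketch of a G.711 mu-law (PCMU) codec in a single Java class, following the classic public-domain reference implementation. Treat it as a starting point rather than production code:

    // Minimal G.711 mu-law (PCMU) codec in one class.
    public final class G711u {
        private static final int BIAS = 0x84;   // 132, the standard mu-law bias
        private static final int CLIP = 32635;  // clamp before encoding

        // 16-bit linear PCM -> 8-bit mu-law
        public static byte encode(short pcm) {
            int sample = pcm;
            int sign = 0;
            if (sample < 0) { sample = -sample; sign = 0x80; }
            if (sample > CLIP) sample = CLIP;
            sample += BIAS;
            // Find the segment (position of the highest set bit, 0..7).
            int exponent = 7;
            for (int mask = 0x4000; (sample & mask) == 0 && exponent > 0; mask >>= 1) {
                exponent--;
            }
            int mantissa = (sample >> (exponent + 3)) & 0x0F;
            return (byte) ~(sign | (exponent << 4) | mantissa);
        }

        // 8-bit mu-law -> 16-bit linear PCM
        public static short decode(byte ulaw) {
            int u = ~ulaw & 0xFF;
            int exponent = (u >> 4) & 0x07;
            int magnitude = (((u & 0x0F) << 3) + BIAS) << exponent;
            magnitude -= BIAS;
            return (short) ((u & 0x80) != 0 ? -magnitude : magnitude);
        }
    }

A-law (PCMA) is a nearly identical structure with different segment constants, which is why the answer above calls G.711 the easy case.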
I am trying to write a native Android application in which I want to play a couple of audio streams. In order to synchronize properly with the other audio streams in the rest of the Android system, I need to manage the playback of my streams accordingly. When programming in Java, the Android framework provides APIs like AudioManager.requestAudioFocus and AudioManager.abandonAudioFocus, and it also delivers the appropriate callbacks according to the behavior of other audio streams.
So, is there any way I can call these methods from native code?
It also seems that OpenSL ES is an option. Does OpenSL ES provide methods similar to requestAudioFocus / abandonAudioFocus?
Thanks in advance
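There is no NDK equivalent of these AudioManager methods, so one common pattern is to keep the audio-focus logic in Java and have the native code call up into it through JNI. Below is a minimal sketch of such a helper; the class name AudioFocusHelper is hypothetical, not a platform API:

    import android.content.Context;
    import android.media.AudioManager;

    // Hypothetical helper: native code calls these methods via JNI, since
    // there is no native audio-focus API.
    public class AudioFocusHelper implements AudioManager.OnAudioFocusChangeListener {
        private final AudioManager am;

        public AudioFocusHelper(Context context) {
            am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        }

        // Called from native code before starting playback.
        public boolean requestFocus() {
            int result = am.requestAudioFocus(this,
                    AudioManager.STREAM_MUSIC,
                    AudioManager.AUDIOFOCUS_GAIN);
            return result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED;
        }

        // Called from native code when playback stops.
        public void abandonFocus() {
            am.abandonAudioFocus(this);
        }

        @Override
        public void onAudioFocusChange(int focusChange) {
            // Forward the change to native code here (pause/duck your streams).
        }
    }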
I'm writing a small call recording library for my rooted phone.
I saw in some applications that recording is done through ALSA or CAF on rooted phones.
I couldn't find any example / tutorial on how to use ALSA or CAF for call recording (or even for audio recording for that matter).
I saw the tinyalsa lib project, but I couldn't figure out how to use it in an Android app.
Can someone please show me some tutorial or code example on how to integrate ALSA or CAF in an Android application?
Update
I managed to wrap tinyalsa with JNI calls. However, calls like mixer_open(0) return null pointers, and calls like pcm_open(...) return a pointer, but a subsequent call to pcm_is_ready(pcm) always returns false.
Am I doing something wrong? Am I missing something?
Here's how to build the ALSA lib using Android's toolchain.
And here you can find another repo mentioning ALSA for Android.
I suggest you read this post in order to understand what your choices are and what the current platform situation is.
EDIT after comments:
I think you need to implement your solution with tinyalsa, assuming you are using the base ALSA implementation. If the tiny version is missing something, then you may need to ask the author (but that sounds strange to me, because you are doing basic operations).
After reading this post, we can get some clues about why root is needed (accessing protected mount points).
Keep us updated with your progress, it's an interesting topic!
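For reference, the Java side of such a JNI wrapper could look like the sketch below. Everything here is hypothetical (the library name tinyalsa_jni and the method names are made up); the actual calls to pcm_open / pcm_read live in the C wrapper you build against tinyalsa:

    // Hypothetical Java-side binding for a self-built JNI wrapper around tinyalsa.
    // None of these names are a real API; they only illustrate the structure.
    public class TinyAlsaBridge {
        static {
            System.loadLibrary("tinyalsa_jni"); // your own JNI wrapper library
        }

        // Opens a PCM capture device (card/device as in /dev/snd/pcmC<card>D<device>c,
        // which is why root is needed) and returns an opaque native handle, or 0 on failure.
        public static native long openCapture(int card, int device);

        // Reads one buffer of PCM data; returns bytes read or -1 on error.
        public static native int read(long handle, byte[] buffer);

        public static native void close(long handle);
    }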
Hello everyone, I am new to the Android NDK and want to find out two things:
Can we detect whether the microphone is on or not?
Can we detect which application is using the microphone?
It would be good if anyone has an idea how to do the above through C or through the Android SDK.
1.
AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
boolean muted = am.isMicrophoneMute();
But I don't know how to do it using the NDK (and no one seems to know).
AudioManager documentation
2.
No, you can't.
And that is a security decision: as @AndiJay said, malicious programs could otherwise abuse it.
Can somebody give me some direction on how to synthesize the sounds of instruments (piano, drums, guitar, etc.)?
I am not even sure what to look for.
Thanks
Not sure if this is still the case, but Android seems to have latency issues that keep it from doing true sound synthesis. NanoStudio, in my opinion, is the best audio app on iOS, and so far the author refuses to make an Android version because the framework isn't there yet.
See these links:
http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=nanostudio+android#hl=en&q=+site:forums.blipinteractive.co.uk+nanostudio+android&bav=on.2,or.r_gc.r_pw.&fp=ee1cd411508a9e34&biw=1194&bih=939
It all depends on what kind of application you're making. If it's going to be an Akai APC-style app firing off sounds, you could be all right. If you're after true synthesis (crafting waveforms so they replicate pianos, guitars, and drums), which is what JASS (mentioned in another answer) does, then Android might not be able to handle it.
If you're looking for a guide on emulating organic instruments via synthesis, check out the books by Fred Welsh: http://www.synthesizer-cookbook.com/
Synthesizing a guitar, piano, or natural drums would be difficult. Triggering samples that you pass through a synthesis engine is less so. If you want to synthesize analog synth sounds, that's easier.
Here is a project out there you might be able to grab code from:
https://sites.google.com/site/androidsynthesizer/
In the end, if you want to create a full synthesizer or multi-track application, you'll have to render your oscillators, filters, etc. into an audio stream that can be fed to the audio output; AudioTrack (rather than MediaPlayer) is the class Android provides for raw PCM, as in the sketch below. You don't necessarily need MIDI to do that.
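As a concrete example of that last point, here is a minimal sketch that renders a sine oscillator into a buffer and plays it with AudioTrack. The frequency and duration are arbitrary; a real synth would mix several oscillators and filters into the same buffer:

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioTrack;

    public class SineTone {
        // Generates durationMs of a sine wave at freqHz and plays it.
        public static void play(double freqHz, int durationMs) {
            int sampleRate = 44100;
            int numSamples = sampleRate * durationMs / 1000;
            short[] samples = new short[numSamples];
            for (int i = 0; i < numSamples; i++) {
                double t = (double) i / sampleRate;
                samples[i] = (short) (Math.sin(2 * Math.PI * freqHz * t)
                        * 0.8 * Short.MAX_VALUE); // 0.8 leaves headroom
            }
            AudioTrack track = new AudioTrack(
                    AudioManager.STREAM_MUSIC,
                    sampleRate,
                    AudioFormat.CHANNEL_OUT_MONO,
                    AudioFormat.ENCODING_PCM_16BIT,
                    numSamples * 2,              // buffer size in bytes
                    AudioTrack.MODE_STATIC);
            track.write(samples, 0, numSamples);
            track.play();                        // call release() when done
        }
    }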
Here is one person's experience:
http://jazarimusic.com/2011/06/audio-on-android-a-developers-perspective/
Interesting read.
Two projects that might be worth looking at: JASS (Java Audio Synthesis System) and Pure Data. Pure Data is quite interesting, though probably the harder path.
MIDI support on Android sucks. (So does audio support in general, but that's another story.) There's an interesting blog post here that discusses the (lack of) MIDI capabilities on Android. Here's what he did to work around some of the limitations:
Personally I solved the dynamic midi generation issue as follows: programmatically generate a midi file, write it to the device storage, initiate a mediaplayer with the file and let it play. This is fast enough if you just need to play a dynamic midi sound. I doubt it’s useful for creating user controlled midi stuff like sequencers, but for other cases it’s great.
Android unfortunately took out MIDI support in the official Java SDK.
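As a sketch of the workaround quoted above: build a tiny format-0 MIDI file in memory, write it to app storage, and hand it to MediaPlayer. The file name and the note (one middle C) are arbitrary examples:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import android.content.Context;
    import android.media.MediaPlayer;

    public class MidiNote {
        public static void playOneNote(Context context) throws Exception {
            byte[] midi = {
                'M', 'T', 'h', 'd', 0, 0, 0, 6,    // header chunk, length 6
                0, 0,                              // format 0
                0, 1,                              // one track
                0, 0x60,                           // 96 ticks per quarter note
                'M', 'T', 'r', 'k', 0, 0, 0, 12,   // track chunk, 12 bytes of events
                0x00, (byte) 0x90, 0x3C, 0x60,     // delta 0: note-on, middle C, velocity 96
                0x60, (byte) 0x80, 0x3C, 0x40,     // delta 96: note-off
                0x00, (byte) 0xFF, 0x2F, 0x00      // end-of-track meta event
            };
            File f = new File(context.getFilesDir(), "note.mid");
            FileOutputStream out = new FileOutputStream(f);
            out.write(midi);
            out.close();

            // Pass a FileDescriptor rather than a path, since app-private
            // files are not always readable by path from the media service.
            MediaPlayer mp = new MediaPlayer();
            FileInputStream in = new FileInputStream(f);
            mp.setDataSource(in.getFD());
            in.close();
            mp.prepare();
            mp.start();                            // release() when playback ends
        }
    }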
That is, you cannot stream MIDI events directly; you must go through the provided media playback classes (such as MediaPlayer).
You will have to use some DSP (digital signal processing) knowledge and the NDK in order to do this.
I would not be surprised if there was a general package (not necessarily for Android) to allow you to do this.
I hope this pointed you in the right direction!