AudioTrack allows streaming PCM audio buffers to the audio hardware for playback. This is achieved by "pushing" the data to the AudioTrack object using one of the write(byte[], int, int) and write(short[], int, int) methods.
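As an illustration, here is a minimal sketch of that push model: open a track in MODE_STREAM and feed it 16-bit mono PCM via write(). The class and method names are my own; the AudioTrack calls are the standard (if older, pre-AudioAttributes) constructor form.

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class PcmPlayer {
    // Sketch: stream 16-bit mono PCM at 44.1 kHz through AudioTrack.
    public static void play(short[] pcm) {
        int sampleRate = 44100;
        int minBuf = AudioTrack.getMinBufferSize(
                sampleRate,
                AudioFormat.CHANNEL_OUT_MONO,
                AudioFormat.ENCODING_PCM_16BIT);
        AudioTrack track = new AudioTrack(
                AudioManager.STREAM_MUSIC,
                sampleRate,
                AudioFormat.CHANNEL_OUT_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
                minBuf,
                AudioTrack.MODE_STREAM);
        track.play();
        // write() blocks until the samples have been queued for playback.
        track.write(pcm, 0, pcm.length);
        track.stop();
        track.release();
    }
}
```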
PCM is pulse-code modulation. Is that the same PCM that converts analog voice to digital during phone calls? And which hardware can it play back to, only the speaker or some more? Can PCM be used to play audio on the GSM network?
PCM is pulse-code modulation. Is that the same PCM that converts analog voice to digital during phone calls?
I'm not entirely sure what you're asking, but typically there will be a piece of hardware in the phone called a codec. The codec controls the internal loudspeaker, earpiece, microphones, etc., and this is where the analogue audio signals are sampled into digital audio streams (typically 48 kHz linear PCM).
What happens to the audio after that varies a bit between different platforms. It might for example be passed to an audio DSP (Digital Signal Processor) which applies various filters (e.g. noise suppression); and from there to the modem which takes care of compressing the audio (typically using AMR) and transmitting it to the network.
And which hardware can it play back to? Only the speaker or some more?
Any device that the phone can route audio to. A phone will typically have an earpiece, one or more loudspeakers, a 3.5mm stereo jack, and a Bluetooth chip capable of both voice (SCO) and media (A2DP) audio. Some phones also support audio playback over USB and WiFi. Applications don't have direct control of the routing, though. The best you can do is give a hint to the OS about where you'd like the audio to be routed.
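To illustrate what "give a hint to the OS" looks like in practice, here is a sketch using AudioManager (the class and method names around it are my own; a valid Context is assumed to be available):

```java
import android.content.Context;
import android.media.AudioManager;

public class RoutingHint {
    // Sketch: ask the OS to prefer the loudspeaker for a voice stream.
    // This is only a hint; the OS makes the final routing decision.
    public static void preferSpeaker(Context context) {
        AudioManager am =
                (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        am.setMode(AudioManager.MODE_IN_COMMUNICATION);
        am.setSpeakerphoneOn(true);
    }
}
```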
Can PCM be used to play audio on the GSM network?
No. This has nothing to do with the encoding of the audio. There simply isn't any API in Android for injecting audio into the uplink of a voice call.
Related
I have tried to find info on detecting the number of available speakers. When the AudioTrack stream type is set to music (allowing the phone's loudspeakers to be used), I would like to detect whether the sound will come from multiple speakers or only one.
We are developing a VoIP application. One component needs to record audio from the mic and play the remote audio through the speaker, and we need to do some audio/signal processing on the recorded audio.
But on some Android devices, the selected mic and speaker are so close that the audio captured from the mic clips (it is too loud) because of the audio played by the speaker. This gives the captured waveform nonlinear distortion and breaks the audio/signal-processing component.
We don't want to set AUDIO_STREAM_VOICE_CALL to enable the built-in AEC, because it forces the recorded audio sample rate to 8 kHz while we'd like to record at 48 kHz.
So we have considered the following solutions:
Decrease the mic volume. Based on this SO question and this discussion thread, it seems impossible.
Use a specific speaker and mic to make the distance a bit larger, so the volume the mic captures is lower.
So is there any way to select a specific speaker on the Android platform?
If the distance between the microphone and the speaker is crucial here, maybe it would be enough to use the camera's mic:
MediaRecorder.AudioSource.CAMCORDER
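A sketch of how that source could be used with AudioRecord at the 48 kHz rate the question asks for (class and method names are my own; the RECORD_AUDIO permission is assumed to be granted):

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class CamcorderMic {
    // Sketch: capture 16-bit mono PCM at 48 kHz from the camera-facing mic,
    // which on many devices sits farther from the main loudspeaker.
    public static AudioRecord open() {
        int sampleRate = 48000;
        int minBuf = AudioRecord.getMinBufferSize(
                sampleRate,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(
                MediaRecorder.AudioSource.CAMCORDER,
                sampleRate,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
                minBuf * 2); // extra headroom to avoid overruns
        recorder.startRecording();
        return recorder;
    }
}
```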
I'm working on an Android program whose purpose is to record a sound signal and then to analyse it.
To record this sound signal, I am using a special microphone that I want to plug into my Android device directly via the audio line-in, so I won't use the phone's built-in microphone.
Here is what I found on the Android website:
" Defines the audio source. These constants are used with setAudioSource(int).
Summary
Constants
int CAMCORDER Microphone audio source with same orientation as camera if available, the main device microphone otherwise
int DEFAULT Default audio source
int MIC Microphone audio source
int VOICE_CALL Voice call uplink + downlink audio source
int VOICE_COMMUNICATION Microphone audio source tuned for voice communications such as VoIP.
int VOICE_DOWNLINK Voice call downlink (Rx) audio source
int VOICE_RECOGNITION Microphone audio source tuned for voice recognition if available, behaves like DEFAULT otherwise.
int VOICE_UPLINK Voice call uplink (Tx) audio source "
It seems that there is no way to define the line-in as an audio source. I have tried plugging my own microphone into my phone and then recording, hoping that it would be recognized (as a hands-free kit), but it keeps recording from the phone's built-in microphone.
Does anyone know what to set as an audio source so I can use the line-in ?
Many thanks in advance !
You made some assumptions which might not be correct. Regular Android phones have no line-in; they are not designed to be professional audio systems.
When you plug in a headset, Android automatically recognizes the external mic and you don't need to set it in your code. Just leave it at DEFAULT.
The problem is that you will need a special, compatible connector for your mic. As you may have noticed, the headset connector on your phone differs from a stereo jack, so you will need an adapter or to solder your own compatible jack. If you connect your mic with a two-conductor plug, Android will assume it's a headphone and not a mic!
Here is a link to the article: How do I use an external microphone with my Galaxy Nexus?
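A minimal recording sketch, leaving the source at DEFAULT as suggested above so Android picks the headset mic when one is plugged in (the class and the output-path parameter are my own; the MediaRecorder calls are the standard recording sequence):

```java
import android.media.MediaRecorder;
import java.io.IOException;

public class HeadsetRecorder {
    // Sketch: record with the DEFAULT source; when a wired headset mic is
    // plugged in, Android routes capture to it automatically.
    public static MediaRecorder start(String outputPath) throws IOException {
        MediaRecorder recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.DEFAULT);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        recorder.setOutputFile(outputPath); // e.g. a file under getExternalFilesDir()
        recorder.prepare();
        recorder.start();
        return recorder;
    }
}
```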
Is it possible to play a pre-recorded sound through the speaker or mic during a call, so that the other party can hear it?
Please refer to this first before reading through. The modem processor feeds directly from the mic, and since there is no wiring between the application and modem processors, IMHO you cannot route pre-recorded audio to the other end.
The other party can hear only what the microphone picks up. So, if you play a pre-recorded audio file while in a call on the speaker, and it is loud enough, the mic will pick up the audio and it will be heard at the other end. Not sure if echo cancellation will have an "adverse" effect in this case by eliminating the audio from the speaker.
So the idea is to use AudioRecord to get a stream of audio from the microphone and send it over Bluetooth to another device as a raw byte array, just as I get it from AudioRecord, while at the same time receiving the other device's audio stream over Bluetooth and playing it with AudioTrack. Is Bluetooth fast enough to do this between two phones while keeping decent audio quality? If not, is there another way to do this?
At the quality that phone microphones record at, it should be fine.
It's fast enough for your wireless cell and music headphones. It's fast enough for audio and video streaming.
Required bandwidth is ~700 kbit/s for a 44.1 kHz/16-bit mono stream. A stereo stream doubles that (~1.4 Mbit/s).
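The arithmetic behind those figures is just the uncompressed-PCM bit rate: sample rate × bits per sample × channels. A self-contained sketch (class and method names are my own):

```java
public class PcmBandwidth {
    // Raw PCM bit rate in bit/s: sampleRate * bitsPerSample * channels.
    public static int bitRate(int sampleRateHz, int bitsPerSample, int channels) {
        return sampleRateHz * bitsPerSample * channels;
    }

    public static void main(String[] args) {
        System.out.println(bitRate(44100, 16, 1)); // mono:   705600 bit/s ≈ 706 kbit/s
        System.out.println(bitRate(44100, 16, 2)); // stereo: 1411200 bit/s ≈ 1.4 Mbit/s
    }
}
```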