How is voice call audio transferred from the modem to the AP side on Qualcomm chipsets in Android? And how does the call audio get played on the AP side? A source code flow would help. Any references?
I'm expecting a source code flow analysis.
I have a question about the A2DP sink profile in Android. I'm connected to another Bluetooth device and I'm trying to play something from it, but I can't hear any sound.
Where should I add some debug logging to verify that audio data is received and passed on to the rest of the audio stack? In other words, where are the callbacks that carry the audio data?
I was checking the audio_a2dp_hw HAL, but it looks like it's not active for this type of connection.
Regards
I'm trying to send audio over my car's speakers without having the car's audio input set to Bluetooth. It supports the A2DP profile, and I also managed to get it to work, but only when I manually set the input to Bluetooth.
I would like to 'force' the audio to be sent over the speakers.
I previously had an iPhone. The Google Maps app would 'call' itself every time it pronounced the directions, so my car would see it as an incoming call and play the audio over the speakers.
I looked around the internet, and it seems I need to use the HSP (Headset) profile to pull off the same trick.
The documentation states that HSP is supported, but it does not show me how to do this. However, I did find exactly what I need in the documentation. It states the following:
NOTE: up to and including API version JELLY_BEAN_MR1, this method initiates a virtual voice call to the bluetooth headset. After API version JELLY_BEAN_MR2 only a raw SCO audio connection is established.
So initiating virtual voice calls was possible up to that version. I would like to know how to do that now. Any other ideas on how to achieve this would also be very helpful.
I have a Bluetooth module and a microcontroller to decode the music, but I don't know how the music is sent serially. I have searched for this problem but didn't find anything useful.
I need to build a Bluetooth music player system using a microcontroller; my aim is to play music wirelessly.
I want to know how MP3 files are sent from an Android device, how the song is encoded, and how to decode the data. Thank you.
My Bluetooth module is an HC-05, and I'm using an 8051 microcontroller. It's for my project.
Use the AudioTrack.write() method to write music data out to a stream. You can choose the encoding, sample rate, channels, etc. when you create the AudioTrack object. If you are just sending the data over a serial connection, you should be able to use whatever format you want. If you are sending audio to a headset over SCO, it has to be mono, 8,000 Hz, PCM.
Most probably it will be PCM or PWM at the receiving end, i.e. the microcontroller. Connect a DAC, amplify the analog signal, and it should work.
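To make the "PCM bytes over serial" idea concrete, here is a minimal sketch of the sending side in Python. The 8 kHz / 8-bit unsigned mono format is an assumption chosen to suit a simple DAC; it is not something the HC-05 mandates, since an SPP link just carries raw bytes:

```python
import math

SAMPLE_RATE = 8000      # samples per second (assumed)
FREQ_HZ = 440           # test tone: A4
DURATION_S = 0.01       # 10 ms of audio

def tone_to_pcm8(freq_hz, duration_s, sample_rate=SAMPLE_RATE):
    """Render a sine tone as 8-bit unsigned mono PCM bytes.

    This is the kind of raw byte stream you could push over a
    serial (SPP) link to a microcontroller driving a DAC.
    """
    n_samples = int(duration_s * sample_rate)
    samples = bytearray()
    for n in range(n_samples):
        value = math.sin(2 * math.pi * freq_hz * n / sample_rate)
        samples.append(int((value + 1.0) / 2.0 * 255))  # map [-1, 1] -> [0, 255]
    return bytes(samples)

pcm = tone_to_pcm8(FREQ_HZ, DURATION_S)
print(len(pcm))  # 80 bytes: 10 ms at 8 kHz, one byte per sample
```

On the 8051 side the matching loop would read one byte per sample period and write it to the DAC; decoding MP3 on an 8051 is unrealistic, so decoding on the phone and streaming raw PCM like this is the practical split.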
I want to know whether it is possible in Android to transmit a modulated voice during a call, like the Ultra Voice Changer app does. I have searched a lot, but I only found results for how to change a voice after recording. So please tell me: is it possible to transmit a changed voice during a call in Android?
It seems not possible. According to this XDA post, "The calling screen is built within the phone." You can replace the dialer, but you are not able to intercept the voice spoken during a call.
I cannot find any official Android API which would make it possible to write your own "calling" app (i.e., one that records the voice and sends it).
The GSM full-rate speech codec operates at 13 kbit/s and uses a Regular Pulse Excited (RPE) codec. This means that the microphone and speech detection in GSM are optimised for transmission across a time-division-multiplexed 'digital' channel, which is then modulated across the air interface using GMSK, a continuous-phase frequency-shift-keying modulation scheme.
Noises other than the 'average' speech pattern are heavily distorted (or suppressed). For instance, DTMF tones are not well received on a device and must be injected by the network core, although tones designed for the hearing impaired work well. Voice is shaped (filtered) on entry to the codec (microphone design) for the best codec detection and reproduction at the other end.
In summary: it is not possible to 're-modulate' across the GSM system, because your entry point is not a radio (air interface), and you cannot access the GSM digital frames either. Your only access during a voice call is the GSM codec, which expects a voice signal in a confined audio spectrum.
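For a sense of scale, the 13 kbit/s figure follows directly from the codec's framing: GSM full rate emits 260 bits for every 20 ms speech frame. A quick arithmetic check:

```python
# Sanity-checking the 13 kbit/s figure from the GSM full-rate framing:
# the codec emits 260 bits for every 20 ms speech frame.
BITS_PER_FRAME = 260        # GSM full-rate encoded frame size
FRAME_MS = 20               # frame duration
SAMPLE_RATE = 8000          # Hz, narrowband speech input

frames_per_second = 1000 // FRAME_MS                 # 50 frames/s
bitrate = BITS_PER_FRAME * frames_per_second         # encoded bit rate
samples_per_frame = SAMPLE_RATE * FRAME_MS // 1000   # raw input samples per frame

print(bitrate)            # 13000 bit/s, the 13 kbit/s quoted above
print(samples_per_frame)  # 160 input samples squeezed into 260 bits
```

That 160-samples-into-260-bits compression is exactly why the codec only reproduces signals that look like speech: there is no room in the frame for anything else.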
I know there are apps that act like voice changers, where they change your voice and transmit it through the GSM voice channel. Maybe you could make an app that takes the voice and modulates it with something like phase-shift keying or Digital Radio Mondiale (as hams use for digital VHF/HF, dPMR, and MOTOTRBO radio communication), transmits the audio through the GSM voice channel, and then demodulates it back to normal voice at the other end. Instead of a straightforward modulation/demodulation chain, you could also add PGP, AES, pre-shared keys, Blowfish, or whatever encryption you like. I'm also interested in seeing a project like this.
I think it would also be great if we could use this to transmit data through the GSM voice channel, like the 56k dial-up modems of the past, instead of through the GPRS data channel. That would allow you to make a data connection to another phone and transfer files without incurring additional data charges, which is really good for subscribers with unlimited call plans.
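The modulate-then-demodulate idea can be sketched in a few lines. Below is a toy audio-FSK modem over a clean channel; all parameters (bit rate, tone frequencies) are illustrative choices, and as the previous answer explains, the lossy, voice-optimised GSM codec would mangle such tones in practice, which is exactly why real "data over GSM voice" is hard:

```python
import math

SAMPLE_RATE = 8000   # Hz (narrowband telephony rate; assumed)
BIT_RATE = 100       # bits per second (assumed, deliberately slow)
F0, F1 = 1000, 2000  # tones for bit 0 / bit 1, both inside the voice band
SPB = SAMPLE_RATE // BIT_RATE  # samples per bit

def modulate(bits):
    """Audio FSK: emit one tone burst per bit."""
    out = []
    for b in bits:
        f = F1 if b else F0
        out.extend(math.sin(2 * math.pi * f * n / SAMPLE_RATE) for n in range(SPB))
    return out

def demodulate(samples):
    """Non-coherent detection: correlate each bit window with both tones."""
    bits = []
    for i in range(0, len(samples), SPB):
        window = samples[i:i + SPB]
        def energy(f):
            re = sum(s * math.cos(2 * math.pi * f * n / SAMPLE_RATE)
                     for n, s in enumerate(window))
            im = sum(s * math.sin(2 * math.pi * f * n / SAMPLE_RATE)
                     for n, s in enumerate(window))
            return re * re + im * im
        bits.append(1 if energy(F1) > energy(F0) else 0)
    return bits

payload = [1, 0, 1, 1, 0, 0, 1, 0]
audio = modulate(payload)
print(demodulate(audio) == payload)  # True: clean-channel round trip
```

Even at this toy bit rate the round trip only works because the channel here is lossless; passing the same tones through a speech codec would distort them heavily, so a real system would need far more robust modulation and error correction.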
See references:
http://freedv.org/tiki-index.php
http://www.aprs.org
Some months ago, on Android ICS (4.0), I developed an Android kernel module which intercepted the "pcmC0D0p" device to fetch all system audio.
My goal is to stream ALL audio (or at least the played music) to a remote speaker via AirPlay.
The kernel module worked, but there were several problems (kernel versions, root privileges, etc.), so I stopped working on it.
Now, we have Android 4.1 and 4.2 and I have new hope!
Does anyone have an idea of how to capture the audio in Android?
I had the following ideas:
Connect via Bluetooth to the same phone, set routing to BT, and grab the audio on the "other end": this shouldn't work.
Intercept the audio with a kernel module as done before: hardcore; I got it working, but it's not practical.
JACK Audio Connection Kit: sadly Android uses "tinyALSA", not "ALSA". tinyALSA does NOT support any plugins/filters like JACK (but this is what led to the kernel-module idea).
Use PulseAudio as a replacement for AudioFlinger: this is also not practical.
EDIT (I forgot these):
I compiled "tinymix" (a stripped-down version of the ALSA mixer) from tinyALSA (the ALSA used on Android) and tried to route the audio-out to mic-in, but without success (I couldn't make sense of the controls). This also requires rooting: not practical.
I tested OpenSL ES, but I'm no C expert and it ended with "I can record the microphone, but no more" (maybe I was doing it wrong?).
I just found ROUTE_TYPE_LIVE_AUDIO:
A device that supports live audio routing will allow the media audio stream to be routed to supported destinations. This can include internal speakers or audio jacks on the device itself, A2DP devices, and more.
Once initiated, this routing is transparent to the application. All audio played on the media stream will be routed to the selected destination.
Maybe this helps in some way?
I'm running out of ideas but want to crack this nut. Maybe someone can help me?
EDIT:
I'm really new to C and kernel coding (though I did successfully build a cross-compiled audio-interception module). But isn't it possible in some way to listen at the point where the PCM data passes from userspace (the Java/C layer?) to kernel space (tinyALSA, the kernel driver), without hacking and rooting?
You can pass the audio traffic through a local socket server:
Create a local socket server.
Modify the audio HTTP stream address so that it goes through your local socket server, for example:
http://example.com/stream.mp3 ->
http://127.0.0.1:8888/http://example.com/stream.mp3
where 8888 is your socket port.
Play
http://127.0.0.1:8888/http://example.com/stream.mp3
with the default media player.
Read all incoming requests on port 8888 in your local socket server, analyze them, and forward them to the example.com server.
Read the response from example.com and pass it on to the local client (the media player). HERE you can capture the audio stream!
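The request-rewriting and capture steps of this answer can be sketched with two small helpers (Python; the request format and the plain-HTTP assumption are illustrative, and a real proxy would also need to forward headers and handle range requests):

```python
# Sketch of the proxy's two key steps (assumptions: plain HTTP, and the
# upstream URL is embedded verbatim in the request path, as in the answer).

def upstream_url(request_line):
    """Extract the real stream URL from a proxied request line.

    The player asks our local server for
        GET /http://example.com/stream.mp3 HTTP/1.1
    and we recover http://example.com/stream.mp3 to fetch it ourselves.
    """
    method, path, _version = request_line.split(" ", 2)
    if method != "GET" or not path.startswith("/http://"):
        raise ValueError("not a proxied stream request: " + request_line)
    return path[1:]  # drop the leading '/'

def tee(chunk, capture, client_send):
    """Forward a response chunk to the player while keeping a copy.

    This is the 'HERE you can capture the audio stream' step.
    """
    capture.extend(chunk)   # our copy of the audio bytes
    client_send(chunk)      # bytes continue on to the media player

url = upstream_url("GET /http://example.com/stream.mp3 HTTP/1.1")
print(url)  # http://example.com/stream.mp3
```

The media player never notices the detour: it just sees an ordinary HTTP stream from 127.0.0.1, while every chunk you relay passes through `tee()` and can be written to disk or re-sent over AirPlay.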