I'm creating an app that interacts with the phone call audio stream, and I'm wondering whether this is a reasonable workaround for playing audio into the call stream.
I know it isn't directly possible through the API, but is it possible to create a fake Bluetooth device that exposes a virtual input? I figure it's a similar problem to not being able to answer calls via the API: you can't do it directly, but you can create a virtual Bluetooth device that sends a fake keypress, which picks up the call.
Or is there another workaround that may do what I'm trying to accomplish?
I am trying to build a calling app that routes the phone call's audio output and input to and from the computer, respectively.
For this, I am creating a server on a computer that uses WebSockets to communicate with the Android device over WLAN.
But I am not able to find a way to get audio input and output from the phone call.
Any advice on how to approach this would be hugely appreciated.
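The computer side of this can be sketched independently of the (unsolved) problem of capturing call audio on the phone. Below is a minimal sketch of the receiving endpoint; it assumes a simple length-prefixed framing and uses a plain TCP socket in place of a real WebSocket handshake (both are assumptions for illustration, not part of the question; a production version would sit behind a WebSocket library):

```java
import java.io.*;
import java.net.*;

// Sketch of the computer-side endpoint. Framing is an assumption: each
// audio chunk from the phone is prefixed with a 4-byte big-endian length.
public class AudioRelayServer {

    // Reads length-prefixed audio chunks from `in` and writes the raw
    // payload to `sink` until the stream ends.
    public static void handle(InputStream in, OutputStream sink) throws IOException {
        DataInputStream frames = new DataInputStream(in);
        while (true) {
            int len;
            try {
                len = frames.readInt();   // 4-byte chunk length
            } catch (EOFException end) {
                break;                    // phone closed the connection
            }
            byte[] chunk = new byte[len];
            frames.readFully(chunk);
            sink.write(chunk);
        }
        sink.flush();
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(5005);   // port is arbitrary
             Socket phone = server.accept();
             OutputStream out = new FileOutputStream("call_audio.raw")) {
            handle(phone.getInputStream(), out);
        }
    }
}
```

The `handle` method is separated from the socket setup so the framing logic can be tested without a network connection.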
I'm trying to send audio over my car's speakers without setting the car's audio source to Bluetooth. The car supports the A2DP profile, and I managed to get that working, but only when I manually switch the source to Bluetooth.
What I'd like is to 'force' the audio onto the speakers.
I previously had an iPhone. The Google Maps app would 'call' itself every time it announced directions, so my car saw it as an incoming call and played the audio over the speakers.
I looked around the internet, and it seems I need to use the HFP (Hands-Free) profile to pull off the same trick.
The documentation states that HFP is supported, but it does not show how to do this. I did find something that looks like exactly what I need. It states the following:
NOTE: up to and including API version JELLY_BEAN_MR1, this method initiates a virtual voice call to the bluetooth headset. After API version JELLY_BEAN_MR2 only a raw SCO audio connection is established.
So initiating virtual voice calls used to be possible. I would like to know how to do that now. Any other ideas on how to achieve this would also be very helpful.
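Since JELLY_BEAN_MR2 only a raw SCO link is established, but a SCO link may still be enough: many head units switch to the hands-free audio path as soon as SCO audio comes up. A minimal sketch using the public `AudioManager` SCO calls (whether the car then behaves as if a call is active depends entirely on the head unit; this runs on a device, not on a desktop JVM):

```java
import android.content.Context;
import android.media.AudioManager;

// Sketch: open/close a raw SCO audio link to the connected headset/car kit.
// Requires the MODIFY_AUDIO_SETTINGS permission and a connected HFP device.
public class ScoAudioHelper {

    public static void openScoLink(Context context) {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        am.setMode(AudioManager.MODE_IN_COMMUNICATION);
        // startBluetoothSco() is asynchronous: register a receiver for
        // AudioManager.ACTION_SCO_AUDIO_STATE_UPDATED and wait for
        // SCO_AUDIO_STATE_CONNECTED before playing anything.
        am.startBluetoothSco();
        am.setBluetoothScoOn(true);
        // Once connected, audio played with usage VOICE_COMMUNICATION
        // (e.g. via AudioTrack) is routed over the SCO link.
    }

    public static void closeScoLink(Context context) {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        am.stopBluetoothSco();
        am.setBluetoothScoOn(false);
        am.setMode(AudioManager.MODE_NORMAL);
    }
}
```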
I would like to create an app that simulates an incoming call on a hands-free device in response to certain app events. I found that there are two main approaches. The first is to create an RFCOMM connection and talk HFP via AT commands. Unfortunately I spent a lot of time on this approach without success: I'm able to establish the RFCOMM link but never receive any AT commands. So now I'm considering the second approach: if I could simulate an incoming call, the hands-free device would start vibrating automatically. Does anyone know whether that is possible? Perhaps by broadcasting an intent that the phone is receiving a call?
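For reference, the first approach would look roughly like the sketch below: the phone listens as the HFP Audio Gateway, accepts the hands-free's RFCOMM connection, acknowledges its AT commands, and sends RING to simulate an incoming call. The HFP-AG service UUID is the standard SDP-assigned one; the AT dialogue shown is heavily simplified and real hands-free units expect a fuller BRSF/CIND negotiation. Note also that the platform's own Bluetooth stack usually already registers the HFP-AG service, so the hands-free may connect to the system instead of your socket, which could explain why no AT commands arrive:

```java
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothServerSocket;
import android.bluetooth.BluetoothSocket;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.util.UUID;

// Simplified sketch of acting as the HFP Audio Gateway over RFCOMM.
public class FakeCallAg {
    // Standard SDP UUID for the HFP Audio Gateway role.
    private static final UUID HFP_AG_UUID =
            UUID.fromString("0000111F-0000-1000-8000-00805F9B34FB");

    public static void run() throws IOException {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        BluetoothServerSocket server =
                adapter.listenUsingRfcommWithServiceRecord("HFP-AG", HFP_AG_UUID);
        try (BluetoothSocket socket = server.accept()) {
            server.close();   // single connection is enough for this sketch
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            OutputStream out = socket.getOutputStream();
            String cmd;
            while ((cmd = in.readLine()) != null) {
                // Grossly simplified service-level handshake: advertise no
                // features, acknowledge every command, and ring once the
                // hands-free enables event reporting (AT+CMER).
                if (cmd.startsWith("AT+BRSF")) out.write("\r\n+BRSF: 0\r\n".getBytes());
                out.write("\r\nOK\r\n".getBytes());
                if (cmd.startsWith("AT+CMER")) out.write("\r\nRING\r\n".getBytes());
                out.flush();
            }
        }
    }
}
```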
Is it possible in Android to make a phone call or SIP call and play a sound file after the call is established? Another option that would work for me is having the TTS engine read some text after the call is established, so that the person on the other side can hear it.
Is this possible?
Thanks!
If you mean played locally (i.e. only you can hear it), then sure. That should work without any special tricks.
If you mean injecting audio into the uplink so that the other party can hear it, then no - at least not during a normal voice call. It might be possible during a SIP call if you implement the whole SIP stack yourself and generate the audio packets in your app. I'm not really familiar with how SIP calls work, so I can't say whether that would succeed.
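For the local-playback case, a minimal sketch would be to trigger TTS when the call goes off-hook. The class and utterance id below are arbitrary names for illustration; only the local user hears the speech, nothing reaches the uplink on a normal voice call:

```java
import android.content.Context;
import android.speech.tts.TextToSpeech;
import android.telephony.PhoneStateListener;
import android.telephony.TelephonyManager;

// Sketch: speak a phrase locally once a call becomes active.
// Requires the READ_PHONE_STATE permission for the call-state callback.
public class CallTtsHelper {
    private TextToSpeech tts;

    public void start(Context context) {
        tts = new TextToSpeech(context, status -> {
            // Ready to speak when status == TextToSpeech.SUCCESS.
        });
        TelephonyManager tm =
                (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);
        tm.listen(new PhoneStateListener() {
            @Override
            public void onCallStateChanged(int state, String incomingNumber) {
                if (state == TelephonyManager.CALL_STATE_OFFHOOK) {
                    // QUEUE_FLUSH drops anything pending; "call-tts" is an
                    // arbitrary utterance id.
                    tts.speak("Call established", TextToSpeech.QUEUE_FLUSH,
                            null, "call-tts");
                }
            }
        }, PhoneStateListener.LISTEN_CALL_STATE);
    }
}
```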
I'm developing an AIR for Android application and am currently sending audio to FMS servers via the standard NetStream/Microphone options. I (ignorantly) assumed that attaching a Bluetooth device would be pretty simple, and that connecting it would make it show up as a native "Microphone". Unfortunately, it does not.
I don't think it is even possible to use NetStream.publish to publish raw bytes, so the only hope is that there's a way to use NativeProcess + Java to create a native microphone "handle" that AIR can pick up on.
Has anyone run into this issue?
I think one possible solution would be to use NetConnection.send() instead of NetStream.publish().
You need to get sound data from your BT microphone, and I am not sure you can do that with AIR alone. You may need an Android service that captures the sound data and feeds your AIR app via a file, a UDP port, an invoke, etc.
Once you have sound data, encode it so Flash can play it (Speex, Nellymoser, etc.). You can do the encoding in your Android service as well.
Whenever your AIR app receives sound data, send it to your streaming server via NetConnection.send().
Extend your streaming server to process the sound data it receives. You can embed it in an FLV stream, or relay it to other Flash clients if it is a chat app.
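The "Android service feeds the AIR app over a UDP port" idea could be sketched as below. With Bluetooth SCO routing active, `MediaRecorder.AudioSource.MIC` typically captures the BT headset mic; the port, host, and 16 kHz mono PCM format here are assumptions for illustration, not requirements (requires the RECORD_AUDIO permission):

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Sketch: capture mic audio and forward raw PCM frames over UDP to a
// local listener (e.g. the AIR app).
public class MicUdpForwarder {

    public static void forward(String host, int port) throws Exception {
        int rate = 16000;   // assumed sample rate
        int bufSize = AudioRecord.getMinBufferSize(rate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, rate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufSize);
        DatagramSocket socket = new DatagramSocket();
        InetAddress dest = InetAddress.getByName(host);
        byte[] buf = new byte[bufSize];
        rec.startRecording();
        try {
            while (!Thread.currentThread().isInterrupted()) {
                int n = rec.read(buf, 0, buf.length);
                if (n > 0) socket.send(new DatagramPacket(buf, n, dest, port));
            }
        } finally {
            rec.stop();
            rec.release();
            socket.close();
        }
    }
}
```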
Other than that, I can't find a way to get a "microphone handle" for your BT microphone. I once considered creating a virtual device on Android, but I couldn't find a solution.