I'm working on an audio recorder app that uses a Bluetooth mic to record audio onto an Android device (Nexus 7, rooted Android 4.4.2). It's currently implemented over HFP and everything is working fine. The Bluetooth mic is built with Bluegiga's WT32 Bluetooth module plus a mic input; audio quality over HFP isn't great, but it's sufficient for now.
However, I'm now trying to change the Bluetooth profile to A2DP, since there are two mic inputs (L/R) and the WT32 supports A2DP (source). After much research I found that stock Android doesn't support A2DP (sink), and that it's possible to modify Android's Bluetooth stack to enable it.
What I don't understand is how one accesses and modifies the Bluetooth stack. It would be nice if someone could break down the steps to achieve this.
I've tried following the answer to this question:
Receive audio via Bluetooth in Android, yet I can't seem to find the appropriate file to modify. Actually, I don't even know if I'm looking in the right folder. I've looked through the device's files via Android Studio's DDMS File Explorer.
P.S. I'm still fairly new to Android app development, so I may have misused some terminology; I apologize in advance for that.
So the above answer isn't totally correct.
The following is how it breaks down:
HAL is the hardware abstraction layer that implements the actual Bluetooth state machines in C/C++ code; as such, it controls the various state machines for the A2DP, HFP, GATT, SPP, AVRCP, etc. services. Each of these services also references SMP and ATT files for controlling the actual Bluetooth server or client databases, and their security.
HCI is where the actual work gets done. The HAL doesn't really do anything itself; it assembles the complex data messages that get sent over a TTY serial link (either SPI or UART) to an interconnected chip on the PCBA via the methods in the HCI layer, which can be found in the "BTE" layer in the /external/bluetooth/bluedroid/ directory of an Android compilation trunk from AOSP 4.2.2 to current. Currently there are several manufacturers of these chips, but they are mostly all Broadcom-based ICs packaged in a dual- or triple-radio package that contains a Wi-Fi, Bluetooth 4.0 Smart, and Bluetooth 4.0 radio.
It is possible to do what you are trying to do, but you would need to include hardware.so and bluetooth_jni.so in an NDK/JNI package/project that ships with your app and registers via the calls from the .cpp files for each of the Bluetooth services found in packages/apps/Bluetooth/jni. You would then handle the registration of com_android_bluetooth_a2dp.cpp and com_android_bluetooth_avrcp.cpp in your NDK library as their appropriately typed objects.
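For a rough sense of the app-side wiring, here is a minimal sketch (not a working implementation) of loading those native libraries from an NDK-backed app. The library names are the ones mentioned above; the native hook is a hypothetical placeholder for the registration calls in com_android_bluetooth_a2dp.cpp:

```java
// Minimal sketch, assuming the platform libraries are accessible to the app
// (in practice this typically requires a system/privileged app on a rooted
// or custom build). registerA2dpSinkNatives() is a hypothetical hook
// standing in for the registration code in com_android_bluetooth_a2dp.cpp.
public class BluetoothNativeLoader {
    static {
        System.loadLibrary("hardware");       // hardware.so
        System.loadLibrary("bluetooth_jni");  // bluetooth_jni.so from packages/apps/Bluetooth
    }

    // Hypothetical native method; the real symbol would live in your NDK library.
    public static native void registerA2dpSinkNatives();
}
```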
The other issue is that you will need to implement your own custom A2DP stack, as the Android Bluedroid stack only has bits and pieces of the sink role implemented in the frameworks, while the source role has a full implementation. Additionally, depending on what you actually intend to do with your Bluetooth A2DP sink, you will need to implement AVRCP as well. Per the Bluetooth SIG (Special Interest Group), there are interconnectivity requirements between Bluetooth devices that will lead to major issues if you implement the sink role without the AVRCP "remote control target device" and "remote control controller device" roles: the sink-role ATT commands from Bluetooth over A2DP (or any Bluetooth service/profile) execute certain handshakes during the service discovery process, and when the associated gateway (the connecting device) executes a capabilities request, the A2DP service is expected to expose I/O capabilities for start/stop commands, and possibly skip/track-advance commands.
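As an aside on where those start/stop and skip commands surface at the app level: on stock Android, AVRCP pass-through commands from a remote controller arrive as media session callbacks. The sketch below only illustrates that app-level surface; it is not the in-stack AVRCP implementation described above:

```java
// Illustration only: AVRCP pass-through commands (PLAY, PAUSE, FORWARD, ...)
// reach an app through MediaSession callbacks (API 21+). This does not
// implement the AVRCP target/controller roles inside the stack.
import android.content.Context;
import android.media.session.MediaSession;

public final class AvrcpCommandSurface {
    public static MediaSession create(Context ctx) {
        MediaSession session = new MediaSession(ctx, "avrcp-demo");
        session.setCallback(new MediaSession.Callback() {
            @Override public void onPlay()       { /* remote PLAY arrived */ }
            @Override public void onPause()      { /* remote PAUSE arrived */ }
            @Override public void onSkipToNext() { /* remote FORWARD arrived */ }
        });
        session.setActive(true); // make this session the media-button target
        return session;
    }
}
```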
In addition to all of this, when implementing A2DP you will need to choose whether you will be handling PCM streams or AAC streams. If you are handling AAC streams (or DRM-protected PCM streams for that matter, which anything like Pandora, Spotify, etc. uses), you need to implement the SBC encoder or decoder appropriate to your implementation, or else all you will have is a bunch of encrypted data. Also, be sure to run at the appropriate sample rate for your device's AudioManager implementation: some phones use 48,000 Hz and some use 44,100 Hz. This is important if you want high-quality audio, as most PCM A2DP implementations that utilize Surround Sound 7.1+ require 48,000 Hz as well as AAC encoding/decoding.
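Rather than hard-coding 44,100 or 48,000 Hz, you can ask AudioManager for the device's preferred output sample rate at run time (available since API 17); a minimal sketch:

```java
// Query the device's native output sample rate so an encoder/decoder can
// match it (48,000 Hz on some phones, 44,100 Hz on others, as noted above).
import android.content.Context;
import android.media.AudioManager;

public final class SampleRateHelper {
    public static int nativeSampleRate(Context ctx) {
        AudioManager am = (AudioManager) ctx.getSystemService(Context.AUDIO_SERVICE);
        String rate = am.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);
        return rate != null ? Integer.parseInt(rate) : 44100; // fall back to 44.1 kHz
    }
}
```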
I hope this provides you with some insight.
https://android-review.googlesource.com/#/c/98161/ implements A2DP sink. It works on the Nexus 5. You can try it.
This is actually what I've been trying to do for a long time...
The reason you can't find the configuration file is that Google replaced the BlueZ Bluetooth stack with a new stack built by Google and Broadcom. The new stack uses a different configuration file, which I don't know how to tinker with.
If you are serious about it, the closest thing I found to start with is the official introduction to the Bluetooth framework on Android:
https://source.android.com/devices/bluetooth.html
There have been lots of changes to the Android OS over time.
As of Android O (Android 8.x), sink profiles are partially supported by Google; for example, the audio sink will work if it is enabled in the profile configuration. The higher layer of BT appears to be more or less implementation-complete at the framework level, which takes the form of an app, i.e. packages/apps/Bluetooth (with some bugs, but it still works). However, not all profiles are complete at the framework's lower layer, the btif HAL interfaces (such as btif_rc.cpp, etc., which you can look at in the Android source), which is Google's replacement for the older BlueZ stack.
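For illustration, on a build where the sink profile is enabled, an app can reach the A2DP sink service through the usual profile-proxy mechanism. The profile constant is hidden in the public SDK, so the value below (11 in AOSP) is hard-coded here, and this only works if the build actually ships the sink profile:

```java
// Sketch, assuming an AOSP build with the A2DP sink profile enabled.
// BluetoothProfile.A2DP_SINK is @hide; its AOSP value is 11.
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothProfile;
import android.content.Context;

public final class A2dpSinkProbe {
    private static final int A2DP_SINK = 11; // hidden BluetoothProfile.A2DP_SINK

    public static void probe(Context ctx, BluetoothAdapter adapter) {
        adapter.getProfileProxy(ctx, new BluetoothProfile.ServiceListener() {
            @Override public void onServiceConnected(int profile, BluetoothProfile proxy) {
                // Proxy to the sink service; devices streaming to us appear in
                // proxy.getConnectedDevices().
            }
            @Override public void onServiceDisconnected(int profile) { }
        }, A2DP_SINK);
    }
}
```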
As I said, the BT sink is partially implemented and in a work-in-progress state. A BT sink such as audio will work if enabled as a sink profile, but not everything will; AVRCP, for instance, will not work. At present, I have seen an issue in the AOSP code where incoming traffic from the remote device to Android works, but outgoing traffic from Android to the remote device (which the AVRCP profile depends on) does not, because the remote device object is not maintained in the stack, so JNI calls from app/Bluetooth fail with a null device in the btif_*.cpp files. For example, sending pass-through commands doesn't work.
So, we may see Bluetooth sink profiles working in the future.
If you want to explore more, check AOSP:
Services at packages/apps/Bluetooth/
HALs at system/bt/btif/
Related
I have been wondering how to capture audio input through USB on Android.
My scenario is to receive audio from external hardware and play that received audio through an Android app. This transmission is to be done over USB.
Is there any way to do this using the Android SDK / Android NDK?
Any suggestions would be helpful.
What I have done so far: I am able to interact with the hardware using the CDC class and to play some random, noisy audio through USB in my app. However, I can neither get clear sound with that approach, nor is the audio transmission consistent.
Thanks.
Regards, Vivek
Most modern Android devices can act as a USB host, so you can connect e.g. a USB microphone for capturing audio. Android also contains support for the USB audio class; use that to get access to the audio on the device.
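To check whether Android actually sees an audio-class peripheral on the bus, you can enumerate the attached USB devices and look for an interface with the audio class code; a small sketch:

```java
// Enumerate attached USB devices and return the first one exposing a
// USB audio-class interface (UsbConstants.USB_CLASS_AUDIO == 1).
import android.content.Context;
import android.hardware.usb.UsbConstants;
import android.hardware.usb.UsbDevice;
import android.hardware.usb.UsbManager;

public final class UsbAudioFinder {
    public static UsbDevice findAudioDevice(Context ctx) {
        UsbManager mgr = (UsbManager) ctx.getSystemService(Context.USB_SERVICE);
        for (UsbDevice dev : mgr.getDeviceList().values()) {
            for (int i = 0; i < dev.getInterfaceCount(); i++) {
                if (dev.getInterface(i).getInterfaceClass() == UsbConstants.USB_CLASS_AUDIO) {
                    return dev;
                }
            }
        }
        return null; // no audio-class peripheral attached
    }
}
```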
Since you have already experimented with the Communication Device Class (CDC), you are aware of Android's USB host functionality. Now you need to ensure your peripheral implements the USB audio class (the audio source part) and make your app use the audio class to obtain the audio. This is explained pretty well here, so it does not make sense to copy all the information into this post. If you are already using the audio class, that page may explain some of the issues you are seeing (e.g. using the wrong format).
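Once the peripheral enumerates as a USB audio-class source, it typically becomes available as a regular input, so a plain AudioRecord loop can capture from it. The sample rate and format below are assumptions; query what your hardware actually supports:

```java
// Capture a buffer of PCM from the current default input (which a USB
// audio-class mic usually becomes once attached). 48 kHz mono 16-bit is
// an assumption, not something every device supports. Needs RECORD_AUDIO.
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public final class UsbMicCapture {
    public static short[] captureOnce() {
        int rate = 48000; // many USB audio devices run at 48 kHz
        int minBuf = AudioRecord.getMinBufferSize(rate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        if (minBuf <= 0) return null; // rate/format not supported here

        AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, rate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf * 2);
        short[] buf = new short[minBuf];
        rec.startRecording();
        rec.read(buf, 0, buf.length); // hand 'buf' to your processing chain
        rec.stop();
        rec.release();
        return buf;
    }
}
```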
The USB audio class specifications can be found on the USB.org website. The problem with those is that the audio class is pretty large and Android probably does not support everything.
I have a custom Bluetooth wearable that essentially has two microphones, streams A2DP, and handles AVRCP commands. I would like to connect this device to an Android phone and record audio either with AudioRecord or in native code. I need to do some signal processing on the data, so I can't have it record directly to a file.
I have managed to build a version of Android using AOSP which will pair with the device and appears to receive the A2DP stream, but I have not been able to build a version of the OS/SDK that allows me to use AudioRecord. I have gotten to the point where I can add a source for A2DP, but I'm missing something at the native layer to fully make the connection.
Ideally, I would also like to control the input with AVRCP.
I am currently testing with a hammerhead device, but am willing to move to another device if that works better.
Any input appreciated.
Is it possible to get the available audio endpoints (earpiece, speakerphone, wired headset, Bluetooth headset) via the OpenSL ES API on Android 4.3?
Or is all that stuff done at the Java level?
The current situation is that I have implemented an OpenSL ES audio driver. The driver does nothing but receive mic packets from the default mic and deliver speaker packets to the default speaker endpoint.
If possible, I would like to create a couple of extra functions in my code: one to inquire about endpoints and another to set the endpoint.
Is it possible to get the available audio endpoints (earpiece, speakerphone, wired headset, Bluetooth headset) via the OpenSL ES API on Android 4.3?
You can give hints about how you want the audio to be routed by using different audio stream types and by using some of the AudioManager methods (like setBluetoothScoOn and setSpeakerphoneOn). But in the end it's up to the OEM to decide how to route the audio in any given situation.
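A short sketch of those Java-level hints; note that these are requests, and the platform's audio policy makes the final routing decision:

```java
// Routing hints via AudioManager. The OEM's policy may still override these.
import android.content.Context;
import android.media.AudioManager;

public final class RoutingHints {
    public static void preferBluetoothSco(Context ctx) {
        AudioManager am = (AudioManager) ctx.getSystemService(Context.AUDIO_SERVICE);
        am.setMode(AudioManager.MODE_IN_COMMUNICATION);
        am.startBluetoothSco();     // bring up the SCO link
        am.setBluetoothScoOn(true); // then request routing over it
    }

    public static void preferSpeakerphone(Context ctx) {
        AudioManager am = (AudioManager) ctx.getSystemService(Context.AUDIO_SERVICE);
        am.setSpeakerphoneOn(true);
    }
}
```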
Or is all that stuff done at the Java level?
The routing policy is implemented at the native level. Where you find that code depends on which platform you're working with, but on many of Qualcomm's platforms you'll find it under hardware/qcom/audio/alsa_sound/.
I have a rooted HiSense Google TV which has HDMI IN and OUT ports.
What I want to do is record about 10 seconds of audio from the HDMI IN (from the set-top box). I am new to this, so please bear with me.
Is it possible to do this on a rooted device?
Does the HDMI data get decrypted (due to HDCP) after the HDMI IN and re-encrypted before it is routed out via HDMI OUT?
If I were to try to capture the audio frames on a regular Linux box, how should I go about it? What components should I look into? I cannot find any documentation that describes the low-level architecture and details of how the HDMI IN signal gets routed to HDMI OUT.
Can you please point to the Android framework code that actually does this routing from HDMI IN to OUT? Basically, I want to understand the flow of what happens to the audio signal during the transfer from HDMI IN to OUT.
I am not sure if my questions make sense, but I hope you can give me some pointers on where I should start.
Short answer: not possible. The pass-through is completely isolated from Android via the trusted video path SoCs. You need to be a certified SoC provider to get anywhere near the signal.
An HDMI input device should be identified as AUDIO_DEVICE_IN_AUX_DIGITAL (see audio.h), though I've never come across an Android device with HDMI input, so I can't verify that.
Audio routing is handled by the AudioPolicyManager. There's an AudioPolicyManagerBase in libhardware_legacy, and then there's typically a platform-specific AudioPolicyManager implementation which overrides some of the base class's methods. Where this implementation is found depends on the platform; on Qualcomm platforms it's usually somewhere under hardware/qcom/audio in the source tree.
The AudioPolicyManager performs high-level routing (like mapping stream types and audio sources to audio devices), and then uses the AudioHardware implementation and possibly other platform-specific classes to do the low-level routing (managing audio streams at the hardware level, loading acoustic tuning parameters, interfacing with device drivers, etc.).
Any HDMI input-related functionality is likely to be vendor-specific, so you might need the full source code for your Google TV device (i.e. including all patches that the vendor has applied on top of vanilla Android) if you want to be able to look at the code that handles HDMI audio input.
You will not be able to access either the video or the audio input, since Google TV implements HDCP. The only way to change that, even on a rooted device, is to change the Google TV code and probably also the SoC HDMI drivers, neither of which has been open-sourced by Google.
I am trying to access, programmatically, the data received from two microphones on Android devices.
This raises several questions:
Are there shipping Android devices with two microphones (e.g. for stereo recording)? I know there are devices with two microphones for echo cancellation / noise reduction, but as far as I could find, they can only be accessed as a single microphone for any programmatic purpose.
Are there devices with a microphone/headphone socket supporting stereo external microphones?
Assuming any of the above is positive, is there a way to know what the currently operating microphone setup is?
I would appreciate any response!
Thanks,
Yoav
I only found out that, for example, once you plug in a wired headset with a microphone, it doesn't matter what AudioSource you specify in your code; it always gives you the audio stream from the headset mic. I tried to get access to the internal mic using AudioSource.CAMCORDER, but without luck. I haven't tried with a wireless (BT) headset, though. However, if I plug in headphones (without a mic), it uses the internal microphone. At least this is the outcome on my SGS2 with ICS 4.0. If somebody finds a workaround, I would be happy to hear about it as well.
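For anyone else experimenting, here is a minimal sketch of the kind of attempt described above: requesting a stereo capture from AudioSource.CAMCORDER. Whether the two channels map to two physical mics (or the headset mic overrides the choice) is entirely device-dependent:

```java
// Try to open a stereo capture from the camcorder source. A successful
// STATE_INITIALIZED only means the parameters were accepted, not that the
// two channels come from two physical microphones.
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public final class StereoMicTest {
    public static AudioRecord tryStereoCamcorder() {
        int rate = 44100;
        int minBuf = AudioRecord.getMinBufferSize(rate,
                AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        if (minBuf <= 0) return null; // stereo at this rate not supported

        AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.CAMCORDER,
                rate, AudioFormat.CHANNEL_IN_STEREO,
                AudioFormat.ENCODING_PCM_16BIT, minBuf * 2);
        return rec.getState() == AudioRecord.STATE_INITIALIZED ? rec : null;
    }
}
```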
I haven't tried it yet, but maybe the Native Development Kit (NDK) can allow you to access any microphone you want at a low level.
If you want to make things a bit simpler, you could consider using OpenSL ES for Android, although I have no idea if it provides low-level microphone control.