Capturing audio frames from HDMI IN port - Android

I have a rooted HiSense GoogleTV which has HDMI IN and OUT ports.
What I want to do is record about 10 seconds of the audio from the HDMI IN (from the set-top box). I am new to this, so please bear with me.
Is it possible to do this on a rooted device?
Does the HDMI data get decrypted (due to HDCP) after the HDMI IN and re-encrypted before it is routed out via HDMI OUT?
If I were to try to capture the audio frames on a regular Linux box, how should I go about it? What components should I look into? I cannot find any documentation that describes the low level architecture and details on how the HDMI IN signal gets routed to HDMI OUT.
Can you please point to the Android framework code that actually does this routing from HDMI IN to OUT? Basically, I want to understand the flow of what happens to the audio signal during the transfer from the HDMI IN to the OUT.
I am not sure if my questions make sense, but I hope you can give me some pointers on where I should start.

Short answer: not possible. The pass-through is completely isolated from Android via the Trusted Video Path on the SoC. You need to be a certified SoC provider to get anywhere near the signal.

An HDMI input device should be identified as AUDIO_DEVICE_IN_AUX_DIGITAL (see audio.h), though I've never come across an Android device with HDMI input, so I can't verify that.
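As an aside: on newer Android versions (API 23+) an app can at least check whether the platform advertises any HDMI audio input. A minimal sketch, assuming you have a Context handy; most devices will simply not list such a device:

```java
import android.content.Context;
import android.media.AudioDeviceInfo;
import android.media.AudioManager;

public class HdmiInputProbe {
    /** Returns true if the platform advertises an HDMI audio input (rare). */
    public static boolean hasHdmiAudioInput(Context context) {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        for (AudioDeviceInfo dev : am.getDevices(AudioManager.GET_DEVICES_INPUTS)) {
            if (dev.getType() == AudioDeviceInfo.TYPE_HDMI) {
                return true;
            }
        }
        return false;
    }
}
```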
Audio routing is handled by the AudioPolicyManager. There's an AudioPolicyManagerBase in libhardware_legacy, and there's typically a platform-specific AudioPolicyManager implementation that overrides some of the base class's methods. Where this implementation is found depends on the platform; on Qualcomm platforms it's usually somewhere under hardware/qcom/audio in the source tree.
The AudioPolicyManager performs high-level routing (like mapping stream types and audio sources to audio devices) and then uses the AudioHardware implementation, and possibly other platform-specific classes, to do the low-level routing (managing audio streams at the hardware level, loading acoustic tuning parameters, interfacing with device drivers, etc.).
Any HDMI input-related functionality is likely to be vendor specific, so you might need the full source code for your Google TV device (i.e. including all patches that the vendor has applied on top of vanilla Android) if you want to be able to look at the code that handles HDMI audio input.

You will not be able to access either the video or audio input, since Google TV implements HDCP. The only way to change that, even on a rooted device, is to change the Google TV code and probably also the SoC HDMI drivers, neither of which has been open sourced by Google.

Related

Audio input through USB in Android

I have been wondering how to capture audio input through USB in Android.
My scenario is to receive audio from external hardware and play the received audio through an Android app. The transmission is to be done over USB.
Is there any way to do this using the Android SDK / Android NDK?
Any suggestion would be helpful to me.
What I've done so far: I am able to interact with the hardware using the CDC class, and I'm also able to play some random, noisy audio through USB in my app. But I can neither get clear sound with that approach, nor is the audio transmission consistent.
Thanks.
Regards, Vivek
Most modern Android devices can act as a USB host, so you can connect e.g. a USB microphone for capturing audio. Android also contains support for the USB audio class; use that to get access to the audio on the device.
Since you have already experimented with the Communication Device Class (CDC), you are aware of Android's USB host functionality. Now you need to ensure your peripheral implements the USB audio class (the audio source part) and make your app use the audio class to obtain the audio. This is explained pretty well here, so it doesn't make sense to copy all the information into this post. If you are already using the audio class, that page may explain some of the issues you are having (e.g. using the wrong format).
The USB Audio class specifications can be found on the USB.org website. The problem with those is that the audio class is pretty large, and Android probably does not support everything.
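If the peripheral enumerates as a standard USB audio class device, recent Android versions (API 23+) expose it as a regular audio input, so you can capture from it with plain AudioRecord instead of raw CDC transfers. A rough sketch under that assumption (the sample rate and formats are guesses; check what your hardware actually reports, and note that the RECORD_AUDIO permission is required):

```java
import android.content.Context;
import android.media.AudioDeviceInfo;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class UsbAudioCapture {
    /** Opens an AudioRecord bound to the first USB audio input, or null if none is found. */
    public static AudioRecord openUsbInput(Context context) {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        AudioDeviceInfo usbInput = null;
        for (AudioDeviceInfo dev : am.getDevices(AudioManager.GET_DEVICES_INPUTS)) {
            if (dev.getType() == AudioDeviceInfo.TYPE_USB_DEVICE
                    || dev.getType() == AudioDeviceInfo.TYPE_USB_HEADSET) { // TYPE_USB_HEADSET needs API 26
                usbInput = dev;
                break;
            }
        }
        if (usbInput == null) return null;

        int sampleRate = 44100; // assumption: verify against usbInput.getSampleRates()
        int bufSize = AudioRecord.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufSize);
        rec.setPreferredDevice(usbInput); // route capture to the USB peripheral
        return rec;
    }
}
```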

How can I process audio in real time using a USB guitar interface in Android?

I want to learn how to process audio input in real time on Android, so I want to make an experimental app that does simple audio manipulation of a guitar connected to the Android device, implementing a simple overdrive effect just to learn.
I know that it's possible to capture audio using the AudioRecord class with a normal audio input like the built-in device mic or a TRRS 3.5 mm jack cable, but on most phones the latency is high. I suppose some guitar tuner apps work this way, but I'm wondering how apps like AndRig, DrAmp and usbEffects manage to achieve very low-latency audio processing using an external USB guitar interface like the Behringer UCG102.
So, I want to use this USB guitar audio interface in my project to achieve low latency.
I searched for examples using this kind of guitar interface and didn't find any tutorial or library for it, so...
Can this guitar-to-USB interface be used like a normal input when connected to the device (so I can use the AudioRecord class as with a normal input)?
Do you know if there is any documentation or a tutorial for working with this kind of audio interface?
Or do I need to learn the interface's specific protocol and use low-level USB API programming to do what I want?
The easiest way is probably buying this. There is also an API for Android that looks a lot like LibUsb; you could try that to read the USB endpoint as raw data (not sure if it would work out for your device though).
Another option is to make your own hardware (buy a microcontroller with a good ADC) and use [bulk transfers](https://developer.android.com/reference/android/hardware/usb/UsbDeviceConnection.html#bulkTransfer(android.hardware.usb.UsbEndpoint, byte[], int, int)) (the USB transfer type for moving lots of data).
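If you go the raw-USB route, the read loop itself is plain host-API code. A sketch assuming you have already found the device, claimed its interface, and located its bulk IN endpoint; whether the bytes are usable audio depends entirely on the interface's protocol:

```java
import android.hardware.usb.UsbDeviceConnection;
import android.hardware.usb.UsbEndpoint;

public class UsbBulkReader {
    /** Reads raw data from a bulk IN endpoint until the transfer fails. */
    public static void readLoop(UsbDeviceConnection conn, UsbEndpoint inEndpoint) {
        byte[] buffer = new byte[inEndpoint.getMaxPacketSize()];
        while (true) {
            // Blocks up to 100 ms; returns the number of bytes read, or negative on error.
            int read = conn.bulkTransfer(inEndpoint, buffer, buffer.length, 100);
            if (read < 0) break; // timeout or device gone
            // TODO: hand buffer[0..read) to your audio processing chain
        }
    }
}
```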

Play music via Android speakers

I want to make an app that makes it possible to connect an iPod or mp3 player to my Android device and let the Android speakers function as external speakers.
The ideal situation would be to actually read from speaker output so I can connect a stereo mini jack cable.
Is it possible to read from the headset output with the Android SDK?
A second option would be to use a mono mini jack instead. I could maybe read directly from the microphone and output the audio as a music player. However, having to use a mono mini jack would be a huge disadvantage, because most people don't own such a cable.
UPDATE
For my second option I found this link that would let me put a special adapter onto a stereo cable so the iPod output can go into the mic input. It's a TRRS adapter. This works, but still isn't the ideal solution to me. http://www.techlife.net/2012/12/add-an-audio-input-to-android.html
ANOTHER UPDATE
I did a test with only a mono cable, but it seems that the mic is not recognized, so I really need the TRRS adapter to make sure that the mic is on. I found some apps that can help me with measuring input volume. I think I can achieve my goal for myself with the adapter, but reading from headset output would be nicer and could actually result in building an app.
You need to understand some basic things...
An audio output lets you take audio out of your device.
It's not an audio input that would let you feed an audio signal into your device.
So the concept you've presented cannot work, because that socket is not able to receive an audio signal through a normal stereo jack cable (and connector).
You could try to make it work with a device that supports a headphones/mic headset, which uses a different kind of 3.5 mm jack connector: the so-called TRRS (four-conductor). But to use it in your project you would probably need some cable/socket soldering, and maybe even some sort of microprocessor to help process the signals.
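For the TRRS/mic-input route, the app side is essentially a capture-and-replay loop: read PCM from the mic input (which the adapter feeds with line audio) and write it straight to an AudioTrack. A bare-bones sketch, ignoring latency tuning and error handling (RECORD_AUDIO permission required):

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;

public class MicPassthrough {
    /** Copies audio from the mic input to the speakers until the thread is interrupted. */
    public static void run() {
        final int rate = 44100;
        int inBuf = AudioRecord.getMinBufferSize(rate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        int outBuf = AudioTrack.getMinBufferSize(rate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

        AudioRecord in = new AudioRecord(MediaRecorder.AudioSource.MIC, rate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, inBuf);
        AudioTrack out = new AudioTrack(AudioManager.STREAM_MUSIC, rate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                outBuf, AudioTrack.MODE_STREAM);

        byte[] chunk = new byte[inBuf];
        in.startRecording();
        out.play();
        while (!Thread.currentThread().isInterrupted()) {
            int n = in.read(chunk, 0, chunk.length);
            if (n > 0) out.write(chunk, 0, n); // replay captured PCM immediately
        }
        in.stop(); in.release();
        out.stop(); out.release();
    }
}
```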

How To Modify Android's Bluetooth Stack to Enable A2DP Sink

I'm working on an audio recorder app that uses a Bluetooth mic to record audio onto an Android device (Nexus 7 - rooted Android 4.4.2). It's currently implemented over HFP and everything is working fine. The Bluetooth mic is implemented with Bluegiga's WT32 Bluetooth module + a mic input; audio quality via HFP isn't great, but it's sufficient for now.
However, I'm now trying to change the Bluetooth profile to A2DP, since there are two mic inputs (L/R) and the WT32 supports A2DP (source). After much research I found that stock Android doesn't support A2DP (sink), and that it's possible to modify Android's Bluetooth stack to enable A2DP (sink).
What I don't understand is how one accesses and modifies the Bluetooth stack. It would be nice if someone with an answer could break down the steps to achieve this.
I've tried following the answer to this question:
Receive audio via Bluetooth in Android, yet I can't seem to find the appropriate file to modify. Actually, I don't even know if I'm looking in the right folder. I've looked through the device's files via Android Studio's DDMS File Explorer.
PS: I'm still fairly new to Android app development, so I may have misused some of the terminology; I apologize in advance for that.
So the above answer isn't totally correct.
The following is how it breaks down:
The HAL is the hardware abstraction layer that implements the actual Bluetooth state machines in C/C++ code; as such, it controls the various state machines for the A2DP, HFP, GATT, SPP, AVRCP, etc. services. Each of these services also references SMP and ATT files for controlling the actual Bluetooth server or client databases and their security.
The HCI layer is where the actual work gets done. The HAL doesn't really do anything by itself; it assembles the complex data messages that get sent along a TTY serial link (either SPI or UART) to the radio chip on the PCBA via the methods in the HCI layer, which can be found in the "BTE" layer in the /external/bluetooth/bluedroid/ directory of an Android source tree from AOSP 4.2.2 to current. Currently there are several manufacturers of these chips, but they are mostly Broadcom-based ICs packaged as dual- or triple-radio packages containing Wi-Fi, Bluetooth 4.0 Smart, and Bluetooth 4.0 radios.
It is possible to do what you are trying to do, but you would need to include hardware.so and bluetooth_jni.so in an NDK/JNI package/project that ships with your app and registers via the calls from the .cpp files for each of the Bluetooth services found in packages/apps/Bluetooth/jni. You would then handle the registration of 'com_android_bluetooth_a2dp.cpp' and 'com_android_bluetooth_avrcp.cpp' in your NDK library as their appropriately typed objects.
The other issue is that you will need to implement your own custom A2DP stack, as the Android Bluedroid stack only has bits and pieces of the sink role implemented in the frameworks, while the source role has a full implementation. Additionally, depending on what you actually intend to do with your A2DP sink implementation, you will need to implement AVRCP as well. Per the Bluetooth SIG (Special Interest Group), there are interconnectivity requirements between Bluetooth devices that will lead to major issues if you implement the sink role without the AVRCP "remote control target device" and "remote control controller device" roles: the sink role executes certain handshakes during the service discovery process, and when the associated gateway (the connecting device) executes a capabilities request, the A2DP service is expected to expose I/O capabilities for start/stop commands, and possibly skip/track-advance commands.
In addition to all of this, when implementing A2DP you will need to choose whether you will be handling PCM or AAC streams. If you are handling AAC streams (or DRM-protected PCM streams for that matter, which anything like Pandora, Spotify, etc. uses), you need to implement the SBC encoder or decoder appropriate to your implementation, or else all you will have is a bunch of encoded data. Also, be sure to use the sample rate appropriate for your device's AudioManager implementation; some phones use 48,000 Hz and some use 44,100 Hz. This is important if you want high-quality audio, as most PCM A2DP implementations utilizing surround sound 7.1+ will require 48,000 Hz as well as AAC encoding/decoding.
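On the sample-rate point, the app side can at least query the device's native output rate and size an AudioTrack for the decoded PCM accordingly. A minimal sketch, independent of any particular A2DP stack:

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class SinkPlayback {
    /** Builds an AudioTrack matched to the device's native output rate (44.1 or 48 kHz). */
    public static AudioTrack createTrack() {
        int rate = AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_MUSIC);
        int bufSize = AudioTrack.getMinBufferSize(rate,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        return new AudioTrack(AudioManager.STREAM_MUSIC, rate,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                bufSize, AudioTrack.MODE_STREAM); // feed decoded SBC/AAC PCM here
    }
}
```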
I hope this provides you with some insight.
https://android-review.googlesource.com/#/c/98161/ implements A2DP Sink. It works on Nexus 5. You can try it.
This is actually something I've been trying to do for a long time...
The reason you can't find the configuration file is because Google replaced the Bluetooth stack from BlueZ with a new stack built by Google and Broadcom. The new stack uses a different configuration file, which I don't know how to tinker with.
If you are serious about it, the closest thing I found to start with is the official introduction to the Bluetooth framework on Android:
https://source.android.com/devices/bluetooth.html
A lot of changes have happened in the Android OS over time.
As of Android O (Android 8.*), sink profiles are partially supported by Google. For example, the audio sink will work easily if enabled in the profile configuration. The higher layer of the BT stack looks more or less implementation-complete at the framework level, which takes the form of an app, i.e. packages/apps/Bluetooth (with some bugs, but it still works). But not all profiles are complete at the framework's lower layer, the HAL interfaces of the btif framework (such as btif_rc.cpp, etc., which you can look at in the Android source), which is Google's replacement for the older BlueZ stack.
As I said, the BT sink side is partially implemented and a work in progress. A sink such as the audio sink will work easily if enabled as a sink profile, but not everything does; AVRCP, for example, will not work. At present I have seen an issue in the AOSP code where incoming traffic from the remote device to Android works, but outgoing traffic from Android to the remote device (on which AVRCP depends) doesn't, because the remote device object is not maintained in the stack, so JNI calls from app/Bluetooth fail with a null device in the btif_*.cpp files. For example, sending pass-through commands doesn't work.
So, we may see Bluetooth sink profiles fully working in the future.
If you want to explore more, check AOSP:
Services at packages/apps/Bluetooth/
HAL at system/bt/btif/

Is it possible to access data / record from two (2) microphones in Android devices?

I am trying to access, programmatically, the data received from 2 microphones on Android devices.
This arises several questions:
Are there shipping Android devices with 2 microphones (e.g. for stereo recording)? I know there are devices with 2 microphones for echo cancellation / noise reduction, but as far as I could find, they can only be accessed as a single microphone for any programmatic purpose.
Are there devices with a microphone / headphone socket supporting stereo external microphones?
Assuming any of the above is positive, is there a way to know what the currently operating microphone setup is?
I will appreciate any response!
Thanks,
Yoav
I only found out that, e.g., once you plug in a wired headset with a microphone, it doesn't matter what AudioSource you specify in your code - it always gives you the audio stream from the headset mic. I tried to get access to the internal mic using AudioSource.CAMCORDER, but without luck. I haven't tried with a wireless (BT) headset though. However, if I plug in headphones (without a mic) it uses the internal microphone. At least this is the outcome on my SGS2 with ICS 4.0. If somebody finds a workaround I would be happy to hear about it as well.
I haven't tried yet, but maybe the Native Development Kit (NDK) can allow you to access any microphone you want at a lower level.
If you want to make things a bit simpler, you could consider using OpenSL ES for Android, although I have no idea whether it provides low-level microphone control.
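One quick experiment is simply to ask AudioRecord for a stereo configuration and see what the device grants. A sketch; whether you truly get two distinct microphones or one duplicated channel is device-specific:

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class StereoMicProbe {
    /** Tries to open a stereo capture; returns the channel count granted, or 0 on failure. */
    public static int probeStereoCapture() {
        int rate = 44100;
        int bufSize = AudioRecord.getMinBufferSize(rate,
                AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        if (bufSize <= 0) return 0; // stereo capture not supported at this rate

        AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.CAMCORDER, rate,
                AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, bufSize);
        try {
            if (rec.getState() != AudioRecord.STATE_INITIALIZED) return 0;
            return rec.getChannelCount(); // 2 means the platform accepted stereo input
        } finally {
            rec.release();
        }
    }
}
```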
