Implementing a custom audio HAL in Android

How do I implement a custom audio HAL in Android?
What are the different approaches to implementing the audio HAL in Android, e.g. TinyHAL, UCM, etc.? I am looking at the various approaches to see which suits my requirements.

The first and most important question you should ask yourself is whether you really need to implement your own HAL.
Secondly, there is mandatory reading: the Audio Architecture documentation for Android.
Next, check the source code, e.g. for one of the Nexus devices; this can be treated as a reference design.
After lots of code writing and lots of debugging, pass the CTS test suite to make sure you are compliant with the Android OS.

Related

Integrating any codec within an android VOIP application

I am working on an Android VoIP application that need not work over the PSTN. I am a complete novice in this field and any little help will be appreciated.
I started by researching how WhatsApp voice calling works and found out that it uses PJSIP, which is an open-source SIP stack library (Source: What's up with WhatsApp and WebRTC? - webrtcHacks). I also found that codecs are used in VoIP to compress and then decompress the voice packets.
Knowing that, I am extremely confused about the relationship between those SIP libraries and codecs. Does an Android VoIP app have to implement a SIP library? Every SIP library supports only a few codecs.
Is there any general way by which I can integrate any codec into my Android app, whether it is Opus or Speex or anything like that, independent of the SIP implementation?
Maybe I am sounding confused, but that is true. Even googling a lot on this specific topic did not help me, and my last stop is this community. Any little guidance will be appreciated.
Yes, usually every app implements the codecs on its own. Some codecs are available in the Android SDK, but even in those cases a proper implementation of your own is often better.
G.711 (PCMU and PCMA) is very simple and can be implemented within a single Java class (or even in a single function if you wish). The others are more complicated, but you can find open-source implementations for almost all of them.
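To illustrate how small it is, here is a minimal mu-law (PCMU) codec in a single Java class, following the classic public-domain reference implementation. This is a sketch, not production code:

    public final class G711u {
        private static final int BIAS = 0x84;   // 132, standard mu-law bias
        private static final int CLIP = 32635;  // clip level to avoid overflow

        // Encode one 16-bit linear PCM sample to an 8-bit mu-law byte.
        public static byte linearToMuLaw(int pcm) {
            int sign = (pcm >> 8) & 0x80;              // keep the sign bit
            if (sign != 0) pcm = -pcm;                 // work with the magnitude
            if (pcm > CLIP) pcm = CLIP;
            pcm += BIAS;
            int exponent = 7;                          // find the segment (MSB position)
            for (int mask = 0x4000; (pcm & mask) == 0 && exponent > 0; mask >>= 1) {
                exponent--;
            }
            int mantissa = (pcm >> (exponent + 3)) & 0x0F;
            return (byte) ~(sign | (exponent << 4) | mantissa);
        }

        // Decode one 8-bit mu-law byte back to a 16-bit linear PCM sample.
        public static short muLawToLinear(byte mu) {
            int u = (~mu) & 0xFF;
            int t = (((u & 0x0F) << 3) + BIAS) << ((u & 0x70) >> 4);
            return (short) (((u & 0x80) != 0) ? (BIAS - t) : (t - BIAS));
        }
    }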
Also note that codecs are implemented within PJSIP as well, so if you are using that library then you already have the most popular codecs available.

Any AFSK libraries for Android? (for data input via Android headphone jack)

I'm trying to implement an audio-jack data interface using AFSK and a microcontroller.
Through searches I've seen a couple implementations that use iPhones, such as this:
http://www.creativedistraction.com/demos/sensor-data-to-iphone-through-the-headphone-jack-using-arduino/comment-page-1/#comment-243826
There they used "Perceptive Development’s SerialModem for iPhone", although that seems to contain a hex file and a circuit schematic?
I haven't been able to find anything by searching for "AFSK Android library", "FSK android library" or various other combinations of that. Does anyone know of a good source for these kinds of tools for Android?
Alternatively, is there a library that implements a simplified FFT that you could use to demodulate the data? Naturally you don't want to do a full FFT because you're just trying to distinguish between two tones (ideas drawn from here: http://labs.perceptdev.com/how-to-talk-to-tin-can/), but I'm sure there's something like that out there.
I also looked into spandsp (http://www.soft-switch.org/) while searching for more general DSP libraries, but I'm not sure whether it can be used on Android.
Thanks for your help
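For what it's worth, the usual "simplified FFT" for this job is the Goertzel algorithm, which measures the signal energy at a single frequency; running two detectors and comparing them gives you the mark/space decision. A minimal Java sketch, assuming the Bell 202 tone pair (1200/2200 Hz); your modem's frequencies may differ:

    public final class Goertzel {
        // Relative power of targetFreq in one window of 16-bit PCM samples.
        public static double power(short[] samples, double targetFreq, double sampleRate) {
            double coeff = 2.0 * Math.cos(2.0 * Math.PI * targetFreq / sampleRate);
            double s1 = 0, s2 = 0;
            for (short sample : samples) {
                double s0 = sample + coeff * s1 - s2;  // the Goertzel recurrence
                s2 = s1;
                s1 = s0;
            }
            return s1 * s1 + s2 * s2 - coeff * s1 * s2; // squared magnitude
        }

        // Decide mark vs. space for one bit-sized window of microphone samples.
        public static boolean isMark(short[] window, double sampleRate) {
            return power(window, 1200.0, sampleRate) > power(window, 2200.0, sampleRate);
        }
    }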

How can we implement the CAN Application Layer (CAL) protocol on iPhone/Android?

I need to interface with a device that supports CAN bus, so for communication with it I need to follow CAL. Can anyone help with how I can implement the CAN Application Layer (CAL) protocol on iPhone/Android?
Please help, I am not finding any way to solve this.
"I need to interface a device which is supporting CANBus ,So for
communication with that I need to follow CAL"
The second part of that statement doesn't follow necessarily from the first. There are plenty of devices and systems that communicate via a CAN bus that don't use a formal higher level application framework.
First, you need to be able to communicate with the CAN bus from your application. Your mention of iPhones suggests you'll be targeting consumer handsets, none of which have a CAN interface. So you need to incorporate some adapter hardware (there are USB adapters, and Android at least has USB host access baked into the SDK).
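As a hedged sketch of that USB host access on Android (API level 12+), here is how you might enumerate attached devices to find your adapter; the vendor/product IDs below are placeholders for whatever your hardware reports:

    import android.content.Context;
    import android.hardware.usb.UsbDevice;
    import android.hardware.usb.UsbManager;

    public final class CanUsb {
        // Scan attached USB devices for the CAN adapter; IDs are placeholders.
        static UsbDevice findCanAdapter(Context context) {
            UsbManager manager = (UsbManager) context.getSystemService(Context.USB_SERVICE);
            for (UsbDevice device : manager.getDeviceList().values()) {
                if (device.getVendorId() == 0x1234 && device.getProductId() == 0x5678) {
                    return device; // then requestPermission(), open it, claim its interface
                }
            }
            return null;
        }
    }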
If you then also need to communicate with components that implement a higher-level application framework like CANopen on top of the CAN layer, your options are:
1. Get your hands on the specification from whatever group maintains it, and implement it in your language and framework of choice. This is likely a substantial effort.
2. Purchase or find an open-source implementation. If you purchase the source code for a C implementation, you can compile it into a shared library for your target architecture and, using Android as an example, write a native wrapper for that shared library using the Android NDK to expose it to your Java code (see the sketch below). If you could purchase the source code for a Java implementation, you might be able to port it so that it works natively on Android.
Then you need to glue the data layer together with the application layer, and this will likely be custom development no matter what.
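To make the NDK option concrete, the Java side of such a wrapper is just a thin binding; every name here is hypothetical, standing in for whatever C stack you license and compile into a shared library:

    // Hypothetical JNI binding around a licensed C CANopen stack compiled
    // with the NDK into libcanopen.so; all names are placeholders.
    public final class CanOpenBridge {
        static {
            System.loadLibrary("canopen"); // loads libcanopen.so at class init
        }
        public static native boolean open(int bitrateKbps);               // bring up the stack
        public static native int readSdo(int nodeId, int index, int sub,
                                         byte[] out);                     // expedited SDO read
        public static native void close();                                // tear down
    }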
You need the hardware to support it. I've found Gwentech's GT1026 to work well for CAN bus to Android, but it only works over USB.

Android example use of RtpStream

Since Android API level 12, RTP has been supported in the SDK, which includes RtpStream as the base class, along with AudioStream, AudioCodec, and AudioGroup. However, there is no documentation, and there are no examples or tutorials, to help me use these specific APIs to take input from the device's microphone and output it to an RTP stream.
Where do I specify using the mic as the source, and not to use a speaker? Does it perform any RTCP? Can I extend the RtpStream base class to create my own VideoStream class (ideally I would like to use these for video streaming too)?
Any help out there on these new(ish) APIs please?
Unfortunately these APIs are the thinnest necessary wrapper around the native code that performs the actual work. This means they cannot be extended in Java, and to extend them in C++ you would need a custom Android build, I believe.
As far as I can see, the AudioGroup cannot actually be set to not output sound.
I don't believe it does any RTCP, but my use of it doesn't involve RTCP, so I wouldn't know.
My advice is that if you want to be able to extend functionality or have greater flexibility, then you should find a C or C++ native library that someone has written or ported to Android and use that instead, this should allow you to control what audio it uses and add video streaming and other such extensions.
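For completeness, the basic wiring of these classes looks roughly like this; the addresses and port are placeholders (in practice they come from SIP/SDP negotiation), and this must run off the main thread since it performs network I/O:

    import android.net.rtp.AudioCodec;
    import android.net.rtp.AudioGroup;
    import android.net.rtp.AudioStream;
    import android.net.rtp.RtpStream;
    import java.net.InetAddress;

    public final class RtpCall {
        // Minimal mic-to-RTP (and back) wiring using the android.net.rtp classes.
        static void startCall(InetAddress local, InetAddress remote, int remotePort)
                throws Exception {
            AudioStream stream = new AudioStream(local);   // binds a local RTP socket
            stream.setCodec(AudioCodec.PCMU);
            stream.setMode(RtpStream.MODE_NORMAL);
            stream.associate(remote, remotePort);          // peer learned via signaling
            AudioGroup group = new AudioGroup();
            group.setMode(AudioGroup.MODE_NORMAL);         // mic in, speaker out; no speaker-off mode
            stream.join(group);                            // starts the native audio/RTP engine
        }
    }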

How to synthesize sounds of instruments on Android (Piano, Drums, Guitar, etc...)

Can somebody give me some direction on how to synthesize sounds of instruments (Piano, Drums, Guitar, etc...)
I am not even sure what to look for.
Thanks
Not sure if this is still the case, but Android seems to have latency issues that inhibit it from doing true sound synthesis. NanoStudio, in my opinion, is the best audio app on iOS, and its author so far refuses to make an Android version because the framework isn't there yet.
See this link:
http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=nanostudio+android#hl=en&q=+site:forums.blipinteractive.co.uk+nanostudio+android&bav=on.2,or.r_gc.r_pw.&fp=ee1cd411508a9e34&biw=1194&bih=939
It all depends on what kind of application you're making. If it's going to be an Akai APC-style app firing off sounds, you could be alright. If you're after true synthesis (crafting waveforms so they replicate pianos, guitars, and drums), which is what JASS (mentioned in another answer) does, then Android might not be able to handle it.
If you're looking for a guide on emulating organic instruments via synthesis, check out the books by Fred Welsh: http://www.synthesizer-cookbook.com/
Synthesizing a guitar, piano, or natural drums would be difficult. Triggering samples that you pass through a synthesis engine less so. If you want to synthesize analog synth sounds that's easier.
Here is a project out there you might be able to grab code from:
https://sites.google.com/site/androidsynthesizer/
In the end, if you want to create a full synthesizer or multi-track application, you'll have to render your oscillators, filters, etc. into an audio stream that can be piped to the audio output (e.g. via AudioTrack). You don't necessarily need MIDI to do that.
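A minimal sketch of that render-to-stream idea, assuming a single 440 Hz sine oscillator pushed through AudioTrack as raw PCM:

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioTrack;

    public final class TonePlayer {
        // Render one second of a 440 Hz sine oscillator and play it as 16-bit PCM.
        static void playSine() {
            int rate = 44100;
            short[] buf = new short[rate];                 // one second, mono
            for (int i = 0; i < buf.length; i++) {
                buf[i] = (short) (Math.sin(2 * Math.PI * 440 * i / rate)
                        * 0.8 * Short.MAX_VALUE);          // 0.8 leaves headroom
            }
            AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, rate,
                    AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                    buf.length * 2, AudioTrack.MODE_STATIC);
            track.write(buf, 0, buf.length);
            track.play();
        }
    }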
Here is one person's experience:
http://jazarimusic.com/2011/06/audio-on-android-a-developers-perspective/
Interesting read.
Two projects that might be worth looking at are JASS (Java Audio Synthesis System) and PureData. PureData is quite interesting, though probably the harder path.
MIDI support on Android sucks. (So does audio support in general, but that's another story.) There's an interesting blog post here that discusses the (lack of) MIDI capabilities on Android. Here's what he did to work around some of the limitations:
Personally I solved the dynamic midi generation issue as follows: programmatically generate a midi file, write it to the device storage, initiate a mediaplayer with the file and let it play. This is fast enough if you just need to play a dynamic midi sound. I doubt it’s useful for creating user controlled midi stuff like sequencers, but for other cases it’s great.
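A sketch of that workaround, hand-writing a one-note, format-0 MIDI file and handing it to MediaPlayer; the file location and the note are arbitrary choices:

    import android.content.Context;
    import android.media.MediaPlayer;
    import java.io.File;
    import java.io.FileOutputStream;

    public final class MidiNote {
        // Generate a one-note, format-0 MIDI file and play it via MediaPlayer.
        static void playMidiNote(Context context) throws Exception {
            byte[] midi = {
                'M','T','h','d', 0,0,0,6,  0,0,  0,1,  0,96, // header: format 0, 1 track, 96 ticks/quarter
                'M','T','r','k', 0,0,0,12,                   // track chunk, 12 bytes of events
                0, (byte) 0x90, 60, 96,                      // delta 0: note-on, middle C, velocity 96
                96, (byte) 0x80, 60, 64,                     // delta 96 (one quarter note): note-off
                0, (byte) 0xFF, 0x2F, 0                      // delta 0: end-of-track meta event
            };
            File file = new File(context.getCacheDir(), "note.mid");
            try (FileOutputStream out = new FileOutputStream(file)) {
                out.write(midi);
            }
            MediaPlayer player = new MediaPlayer();
            player.setDataSource(file.getAbsolutePath());
            player.prepare();
            player.start();
        }
    }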
Android unfortunately took MIDI support out of the official Java SDK.
That is, you cannot play MIDI streams directly; you must go through the provided media playback classes.
You will have to use some DSP (digital signal processing) knowledge and the NDK in order to do this.
I would not be surprised if there was a general package (not necessarily for Android) to allow you to do this.
I hope this pointed you in the right direction!
