I'm trying to record audio using the Android NDK. People say I can use "frameworks/base/media/libmedia/AudioRecord.cpp", but that file lives in the platform source tree, not in the NDK. How can I access and use it?
The C++ libmedia library is not part of the public API. Some people use it, but this is highly discouraged because it might break on some devices and/or in a future Android release.
I developed an audio recording app, and trust me, audio support is very inconsistent across devices. It's very tricky, so IMO using libmedia directly is a bad idea.
The only way to capture raw audio with the public API is to use the Java AudioRecord class. It will give you PCM data, which you can then pass to your optimized C routines.
Alternatively, although that's a bit harder, you could write a C/C++ wrapper around the Java AudioRecord class, as it is possible to instantiate Java objects and call methods through JNI.
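As a rough illustration of that JNI route, here is a minimal sketch assuming the Java side has already created and started an android.media.AudioRecord instance and passes it down; the native side simply calls its read(byte[], int, int) method and copies the PCM bytes out (the function name and buffer handling are just placeholders for the example):

    #include <jni.h>
    #include <string.h>

    /* Pull one buffer of PCM data out of a Java android.media.AudioRecord
     * object. "recorder" must already be created and started on the Java
     * side. Returns the number of bytes read, or AudioRecord's (negative)
     * error code. */
    int read_pcm_from_audiorecord(JNIEnv *env, jobject recorder,
                                  char *out, jint out_size)
    {
        jclass cls = (*env)->GetObjectClass(env, recorder);
        /* int read(byte[] audioData, int offsetInBytes, int sizeInBytes) */
        jmethodID read_mid = (*env)->GetMethodID(env, cls, "read", "([BII)I");

        jbyteArray jbuf = (*env)->NewByteArray(env, out_size);
        jint n = (*env)->CallIntMethod(env, recorder, read_mid, jbuf, 0, out_size);

        if (n > 0) {
            jbyte *bytes = (*env)->GetByteArrayElements(env, jbuf, NULL);
            memcpy(out, bytes, (size_t)n);  /* copy the PCM samples to our buffer */
            (*env)->ReleaseByteArrayElements(env, jbuf, bytes, JNI_ABORT);
        }
        (*env)->DeleteLocalRef(env, jbuf);
        (*env)->DeleteLocalRef(env, cls);
        return n;
    }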
This may be a little bit outdated, but:
The safest way to play or record audio in native code is through the OpenSL ES interfaces. However, OpenSL ES is only available on Android 2.3+ and, for now, is implemented on top of the generic AudioFlinger API.
The more robust and simple way is to use the platform sources to get the AudioFlinger headers, plus a generic libmedia.so to link against at build time.
The device-specific libmedia.so should be loaded at application initialization for AudioFlinger to work properly (generally this happens automatically). Note that some vendors change AudioFlinger internals (for unclear reasons), so you may run into memory or behavior issues.
In my experience AudioFlinger worked on all (2.0+) devices, but it sometimes required allocating more memory for the object than the default implementation expects.
Put simply, OpenSL ES is a wrapper with a dynamically loadable C interface, which lets it run on top of any particular AudioFlinger implementation. It is fairly complicated for simple use cases, and may carry even more overhead than the Java AudioTrack/AudioRecord classes because of its internal threading, buffering, etc.
So consider using Java, or the not-so-safe native AudioFlinger, until Google ships a high-performance native audio interface (which seems doubtful for now).
The OpenSL ES API is available from Android 2.3 on (API level 9).
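For reference, here is a minimal sketch in C of creating an OpenSL ES audio recorder. Error handling, the buffer-queue callback, and the actual enqueue loop are omitted, the 16 kHz mono 16-bit format is just an example choice, and the app still needs the RECORD_AUDIO permission:

    #include <SLES/OpenSLES.h>
    #include <SLES/OpenSLES_Android.h>

    static SLObjectItf engineObj, recorderObj;
    static SLEngineItf engine;
    static SLRecordItf recordItf;

    void create_recorder(void)
    {
        /* Engine object and its engine interface */
        slCreateEngine(&engineObj, 0, NULL, 0, NULL, NULL);
        (*engineObj)->Realize(engineObj, SL_BOOLEAN_FALSE);
        (*engineObj)->GetInterface(engineObj, SL_IID_ENGINE, &engine);

        /* Audio source: the default audio input (microphone) */
        SLDataLocator_IODevice loc_dev = { SL_DATALOCATOR_IODEVICE,
            SL_IODEVICE_AUDIOINPUT, SL_DEFAULTDEVICEID_AUDIOINPUT, NULL };
        SLDataSource src = { &loc_dev, NULL };

        /* Audio sink: an Android simple buffer queue, 16-bit mono PCM at 16 kHz */
        SLDataLocator_AndroidSimpleBufferQueue loc_bq =
            { SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2 };
        SLDataFormat_PCM fmt = { SL_DATAFORMAT_PCM, 1, SL_SAMPLINGRATE_16,
            SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
            SL_SPEAKER_FRONT_CENTER, SL_BYTEORDER_LITTLEENDIAN };
        SLDataSink sink = { &loc_bq, &fmt };

        /* Create and realize the recorder, then fetch the record interface */
        const SLInterfaceID ids[] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE };
        const SLboolean req[]     = { SL_BOOLEAN_TRUE };
        (*engine)->CreateAudioRecorder(engine, &recorderObj, &src, &sink,
                                       1, ids, req);
        (*recorderObj)->Realize(recorderObj, SL_BOOLEAN_FALSE);
        (*recorderObj)->GetInterface(recorderObj, SL_IID_RECORD, &recordItf);

        /* Omitted: register a buffer-queue callback, enqueue buffers, then
         * (*recordItf)->SetRecordState(recordItf, SL_RECORDSTATE_RECORDING); */
    }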
I am using ffmpeg to decode a file and play it back on an Android device. I have this working and would now like to decode two streams at the same time. I have read some comments about needing to use the av_lockmgr_register() call with ffmpeg; unfortunately I am not sure how to use it, or how the flow works once these locks are in place.
Currently I have separate threads on the Java side making requests through JNI to native code that communicates with ffmpeg.
Do the threads need to be on the native (NDK) side, or can I manage them on the Java side? And do I need to do any locking, and if so, how does that work with ffmpeg?
UPDATE:
I have this working now. It appears that setting up the threads at the Java SDK level carries over into separate threads at the native level. With that I was able to create a struct holding my variables, and then pass a value to the native layer to specify which struct to use for each video. So far I have not needed any mutexes or locks at the native level, and I haven't had any issues.
Does anyone know of potential gotchas I may run into by not locking with ffmpeg?
I'll answer this myself: the approach from my latest update appears to be working. By controlling the threads from the Java layer and making my native calls on separate threads, everything works and I have not encountered any issues.
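For completeness, in case you later do need ffmpeg's locking hooks (for instance if codec contexts end up being opened from several threads at once), here is a minimal sketch of a pthread-based lock manager registered with av_lockmgr_register(); note that this API belongs to older ffmpeg releases and was removed in later ones:

    #include <stdlib.h>
    #include <pthread.h>
    #include <libavcodec/avcodec.h>

    /* Lock-manager callback: ffmpeg asks us to create/obtain/release/destroy
     * the mutexes it needs when codecs are used from several threads. */
    static int lockmgr_cb(void **mutex, enum AVLockOp op)
    {
        pthread_mutex_t *m;

        switch (op) {
        case AV_LOCK_CREATE:
            m = malloc(sizeof(*m));
            if (!m)
                return 1;
            pthread_mutex_init(m, NULL);
            *mutex = m;
            return 0;
        case AV_LOCK_OBTAIN:
            return pthread_mutex_lock((pthread_mutex_t *)*mutex) != 0;
        case AV_LOCK_RELEASE:
            return pthread_mutex_unlock((pthread_mutex_t *)*mutex) != 0;
        case AV_LOCK_DESTROY:
            pthread_mutex_destroy((pthread_mutex_t *)*mutex);
            free(*mutex);
            *mutex = NULL;
            return 0;
        }
        return 1;
    }

    /* Call once at startup, before any codec is opened. */
    void init_ffmpeg_locking(void)
    {
        avcodec_register_all();
        av_lockmgr_register(lockmgr_cb);
    }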
Since Android API level 12, RTP is supported in the SDK, which includes RtpStream as the base class, plus AudioStream, AudioCodec, and AudioGroup. However, there is no documentation, and no examples or tutorials, to help me use these specific APIs to take input from the device's microphone and output it to an RTP stream.
Where do I specify the microphone as the source, and that no speaker should be used? Does it perform any RTCP? Can I extend the RtpStream base class to create my own VideoStream class (ideally I would like to use these APIs for video streaming too)?
Any help out there on these new(ish) APIs please?
Unfortunately these APIs are the thinnest possible wrapper around the native code that does the actual work. This means they cannot be extended in Java, and to extend them in C++ you would, I believe, need a custom Android build.
As far as I can see, AudioGroup cannot actually be configured not to output sound.
I don't believe it does RTCP, but my use of it doesn't involve RTCP, so I can't say for sure.
My advice is that if you want to extend the functionality or have greater flexibility, you should find a C or C++ native library that someone has written or ported to Android and use that instead. That should let you control which audio source it uses, and add video streaming and other such extensions.
I am trying to clear up my confusion about how to capture and render audio using native code on the Android platform. What I've heard is that there's an API for audio called OpenSL. Are there any recommended guides and tutorials on how to use it?
Also, are there any good audio wrappers for OpenSL, such as an OpenAL wrapper or something similar? I've developed the audio part with OpenAL on other platforms, so it would be nice to reuse that code.
Are there limitations to OpenSL, like something that has to be done in Java code?
How much does OpenSL differ from OpenAL?
Thanks!
There's a native-audio example included in the samples/ directory of recent NDK releases. It claims to use OpenSL ES.
OpenSL and OpenAL differ quite a bit in terms of interfaces. However, they follow a very similar pattern, and the use cases are similar too. One thing to be aware of is that, in the current implementation, OpenSL suffers from the same latency issues as the Java audio APIs.
When using OpenSL you don't have to call any Java code. The latest NDK also has support for a native asset manager, so there's no more going through JNI to pass byte arrays around :)
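As a small illustration of that native asset manager, here is a sketch that reads an asset into a heap buffer; the function name and the idea of loading the whole asset at once are just assumptions for the example, and the AAssetManager is still obtained once from the Java AssetManager object via JNI:

    #include <stdlib.h>
    #include <jni.h>
    #include <android/asset_manager.h>
    #include <android/asset_manager_jni.h>

    /* Read an asset entirely into a malloc'd buffer. "java_asset_mgr" is the
     * android.content.res.AssetManager object handed down from Java once. */
    void *read_asset(JNIEnv *env, jobject java_asset_mgr,
                     const char *name, off_t *out_len)
    {
        AAssetManager *mgr = AAssetManager_fromJava(env, java_asset_mgr);
        AAsset *asset = AAssetManager_open(mgr, name, AASSET_MODE_BUFFER);
        if (!asset)
            return NULL;

        off_t len = AAsset_getLength(asset);
        void *buf = malloc(len);
        if (buf)
            AAsset_read(asset, buf, len);   /* copy the asset contents */
        AAsset_close(asset);

        *out_len = len;
        return buf;
    }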
I am writing an app which needs to decode an H.264 (AVC) bitstream. I found that AVC codec sources exist in /frameworks/base/media/libstagefright/codecs/avc; does anyone know how one can get access to those codecs from an Android app? I guess it's through JNI, but I'm not clear on how this can be done.
After some investigation, I think one approach is to create my own classes and JNI interfaces in the Android source to enable using the codecs in an Android app.
Another way, which does not require any changes to the Android source, is to include the codecs as a shared library in my application using the NDK. Any thoughts on these? Which way is better (if feasible)?
I didn't find much information about Stagefright; it would be great if anyone could point me to some. I am developing on Android 2.3.3.
Any comments are highly appreciated. Thanks!
Stagefright does not support decoding an elementary H.264 stream. However, it does have an H.264 decoder component. In theory this library could be used, but in reality it will be tough to use it as a standalone library because of its dependencies.
The best approach would be to use a JNI-wrapped, independent H.264 decoder (like the one available with ffmpeg).
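To make that concrete, here is a minimal sketch of decoding H.264 with ffmpeg's libavcodec from native code, using the older avcodec_decode_video2() API that matches ffmpeg releases of that era (exact symbol names vary between versions, e.g. older trees spell the codec ID CODEC_ID_H264 and allocate frames with avcodec_alloc_frame()); nal_buf/nal_size stand for your own bitstream buffer:

    #include <libavcodec/avcodec.h>

    /* Open a standalone H.264 decoder context. */
    AVCodecContext *open_h264_decoder(void)
    {
        avcodec_register_all();
        AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        if (!codec || !ctx || avcodec_open2(ctx, codec, NULL) < 0)
            return NULL;
        return ctx;
    }

    /* Feed one chunk of the H.264 bitstream to the decoder.
     * Returns 1 when a complete picture landed in "frame", 0 otherwise. */
    int decode_h264_chunk(AVCodecContext *ctx, AVFrame *frame,
                          uint8_t *nal_buf, int nal_size)
    {
        AVPacket pkt;
        int got_picture = 0;

        av_init_packet(&pkt);
        pkt.data = nal_buf;
        pkt.size = nal_size;

        if (avcodec_decode_video2(ctx, frame, &got_picture, &pkt) < 0)
            return 0;
        return got_picture;  /* on success frame->data[] holds the YUV planes */
    }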
Is there a way to access the Stagefright APIs to decode a JPEG image from the application layer on Android 2.3?
No, the Stagefright APIs are not exposed at the Android application framework level. The Android MediaPlayer class abstracts the internal player frameworks such as Stagefright and OpenCore.
If you have the Android source code, then you can use the JPEG decoder present in Stagefright (probably wrapped as an OMX component) through JNI.
No, it is not possible to use the Stagefright APIs directly from the app layer; you will have to go through the Java APIs. But yes, if you are willing to write the JNI layers and hack around a LOT of code, it is possible in principle.