Has anyone built OpenAL for Android, or found the shared library for it on the system? This seems like an obvious need for a game of any kind, yet there are no resources out there for it. From what I can tell, the Android Java sound library can't do pitch changes, so there seems to be a need for OpenAL. I know OpenAL Soft can be built on top of ALSA, but I'm not sure if anyone's done that, and I'm sure it would take me a month.
If there's a good guide somewhere on sound manipulation on Android without OpenAL, that's fine too. It's just that OpenAL is something of a standard for game makers, and it would be nice to port my thousands of lines of code over to this system. I sort of thought that was the point of the NDK, before I dug into it and saw that there's almost no shared library access on the system.
Thanks... I hope I can actually port without becoming a Java expert myself. Really disliking the NDK so far!
A few options are available now for NDK audio:
It's not OpenAL, but OpenSL ES 1.0.1 is an official part of the NDK as of API level 9 (2.3). More information here. (A minimal initialization sketch follows this list.)
OpenAL Soft has an OpenSL ES backend in its git master (not released as of version 1.13). However, it is currently broken on Android, as it is written for OpenSL ES 1.1, not 1.0.1. See this commit for a fix.
As mentioned in a previous answer, a JNI backend for OpenAL Soft is linked to and described here as the only option for OpenAL on pre-2.3 Android platforms. However, that is an outdated fork of OpenAL Soft; I've updated the backend to the latest version in a GitHub repo here, along with the OpenSL ES 1.0.1 fix. Also included is an untested optional patch that claims to provide better performance and lower latency.
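For option 1, here's a minimal sketch of bringing up OpenSL ES from native code. The function name init_opensl is my own; error handling is reduced to early returns. Link with -lOpenSLES and target API level 9 or higher:

    #include <SLES/OpenSLES.h>

    // Create the engine and an output mix: the two objects every
    // OpenSL ES player needs before any sound can be produced.
    SLresult init_opensl(SLObjectItf *pEngineObj, SLEngineItf *pEngine,
                         SLObjectItf *pOutputMixObj) {
        SLObjectItf engineObj, outputMixObj;
        SLEngineItf engine;

        // The engine object is the library's entry point.
        SLresult r = slCreateEngine(&engineObj, 0, NULL, 0, NULL, NULL);
        if (r != SL_RESULT_SUCCESS) return r;
        r = (*engineObj)->Realize(engineObj, SL_BOOLEAN_FALSE);
        if (r != SL_RESULT_SUCCESS) return r;

        // The engine interface is what creates all other objects.
        r = (*engineObj)->GetInterface(engineObj, SL_IID_ENGINE, &engine);
        if (r != SL_RESULT_SUCCESS) return r;

        // The output mix is the default audio sink for players.
        r = (*engine)->CreateOutputMix(engine, &outputMixObj, 0, NULL, NULL);
        if (r != SL_RESULT_SUCCESS) return r;
        r = (*outputMixObj)->Realize(outputMixObj, SL_BOOLEAN_FALSE);
        if (r != SL_RESULT_SUCCESS) return r;

        *pEngineObj = engineObj;
        *pEngine = engine;
        *pOutputMixObj = outputMixObj;
        return SL_RESULT_SUCCESS;
    }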
Just before Google announced that 3D audio was going to be included in Android 2.3, I managed to compile OpenAL for Android and package it as a shared object.
See http://pielot.org/2010/12/14/openal-on-android/
Might still be helpful if you'd like to target devices < 2.3.
Cheers.
OpenSL is planned for a future Android build; OpenAL isn't available, and the low-level hardware is off-limits to anything you can do in the NDK, so you can't safely build it yourself.
There's no support for low-latency audio even planned; there's a bug tracking that here:
http://code.google.com/p/android/issues/detail?id=3434
Star it if it's relevant to you; maybe Google will listen if it gets enough stars.
EDIT: There IS low-latency audio in Android 4.0+, and OpenSL is available now. See this page and the pages it links to: http://source.android.com/devices/audio/latency.html Also see the NDK guide on OpenSL.
You can use the NDK to build OpenAL and package it with your APK. That way you can access it from your native code.
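For reference, once the library is packaged and linked, the standard OpenAL 1.1 calls work as on any other platform. A minimal sketch, assuming OpenAL Soft built as a shared library (the calls below are the standard API, nothing Android-specific):

    #include <AL/al.h>
    #include <AL/alc.h>

    // Open the default device, create a context, and demonstrate the
    // pitch control the Java sound APIs lacked at the time.
    bool init_openal(ALuint *outSource) {
        ALCdevice *device = alcOpenDevice(NULL);
        if (!device) return false;

        ALCcontext *context = alcCreateContext(device, NULL);
        if (!context || alcMakeContextCurrent(context) == ALC_FALSE) {
            if (context) alcDestroyContext(context);
            alcCloseDevice(device);
            return false;
        }

        // A source can now be pitch-shifted at will.
        alGenSources(1, outSource);
        alSourcef(*outSource, AL_PITCH, 1.5f);  // 1.0f = unmodified pitch
        return true;
    }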
Related
I'm planning to run an app based on (GNOME) libclutter on Android 9 (Pie). I'm quite new to graphics-related stuff and have been wondering about these things, so I'm seeking guidance, direction, or any information that could help me understand this better.
As per the Android Graphics documentation, Android uses OpenGL ES and Vulkan at a low level to render objects. And as per the GNOME Clutter documentation, it can only be compiled with the back-ends mentioned there. (Please check the embedded link for platform details.)
I don't see OpenGL ES or Vulkan support, so am I missing something in my understanding, or can it not be done?
[Clutter maintainer, here]
Yes, Clutter supports OpenGL ES—it uses Cogl, a library that abstracts GL and GLES concepts.
No, Clutter does not support Vulkan at the moment.
No, Clutter and Cogl do not support Android; there was an experimental port, but it was abandoned in 2012.
Additionally, Clutter is in deep maintenance mode: no new development releases, no new features, and only minimal/security/crasher bug fixes are allowed.
I would not recommend using Clutter in a newly written project.
Okay... after spending a few more hours, I was able to figure out an answer! (Yayy..!!!)
As per the Clutter Project website: (somehow I had missed this info previously! :p)
Clutter uses OpenGL for rendering (and optionally OpenGL ES for use on mobile and embedded platforms), but wraps an easy to use, efficient, flexible API around GL's complexity.
So, as per my requirement, I should be able to integrate the Clutter lib source and cross-compile it.
PS: I will try to integrate & build libclutter on Android 9 and will update this answer later with additional information.
==========================================================================
Update:
As pointed out by #ebassi in another answer, I have dropped the idea of integration and am instead looking at using the Android graphics stack directly for the implementation.
Thanks #ebassi...!
I am trying to build the OpenGL SO lib from the Android sources (libGLESv2.so), and I would like a little more understanding of the internal mechanism of Android OpenGL ES and the flow.
Please correct me where I am wrong:
I know that on Windows a developer includes gl.h and statically links to OpenGL32(64).lib, which in turn dynamically links to OpenGL32.dll. (There is probably a way for the developer to link dynamically to OpenGL32.dll as well, but that's not important.)
The developer is exposed to the declarations of the OpenGL APIs, but not to the implementation, which I assume is HW dependent.
The same scenario on Android: assume the developer imports android.opengl.GLES20 and calls the following method: GLES20.glTexEnvf(...)
I would like to know what's going on behind the scenes in Android (maybe Linux is a better starting point for an Android beginner).
The implementation, which resides in the opengl/java/android/opengl/GLES20.java source, calls the native C function glTexEnvf, and unlike on Windows we have its implementation, which resides in opengl/libagl.
Is that true?
In any case, what is the GLES2_dbg library in /libs/GLES20_dbg? I can see some kind of debug implementation there, with Python scripts... are they for compiling a debug version of OpenGL?
What are the .in files and the gl2.cpp file in /libs/GLES20?
Where are the HW calls? Does each GPU vendor ship its own libGLESv2 implementation for the HW calls, as I saw with the libGLESv2_adreno200.so on my Xperia arc?
Please help me understand the flow. If you have a link that explains this structure, even for Linux, that would be great.
On Windows, opengl32.dll contains both a software rasterizer fallback and so-called trampolines into the OpenGL ICD that ships with the GPU driver.
The opengl32.lib is not really a library but a cross-reference for the linker, adding entries into the executable that make the OS dynamically link the program against the DLL at runtime.
On Linux, in the current implementation, libGL.so ships with the graphics driver and contains the vendor-specific implementation. The linkers used on *nix systems don't rely on an extra cross-referencing .lib but can take the information directly from the .so.
On Android, the libGLES you see is only a kind of placeholder to make linking possible. Ultimately the GPU vendor provides the proper library, which drops into the place where the phony libGLES resided.
The .in files are nothing special. They are input files used by configure and build systems to generate source files from a template (the .in file), with fields filled in by configuration values.
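To make the placeholder-library mechanism concrete, here's a simplified sketch of what such a loader does at runtime. The real Android loader keeps a table of candidate paths; the vendor filename below is just the one from the question, and paths vary by device:

    #include <dlfcn.h>
    #include <stdio.h>

    typedef void (*gl_clear_fn)(unsigned int mask);

    // Resolve a GL entry point from whichever library is installed:
    // try the vendor implementation first, then the generic fallback.
    gl_clear_fn load_glClear(void) {
        void *handle = dlopen("/system/lib/egl/libGLESv2_adreno200.so", RTLD_NOW);
        if (!handle)
            handle = dlopen("libGLESv2.so", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return NULL;
        }
        return (gl_clear_fn) dlsym(handle, "glClear");
    }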
Thanks for the quick answer. I did a little more digging and found further explanation here:
Missing OpenGL drivers on Android emulator.
What I understand now is that libagl is a pure SW implementation.
In that case, libhgl is actually the GPU vendor's implementation.
I also understood that libEGL opens the libGLESv2 (I found it in the code, in Loader.cpp)...
So I will ask two more questions:
1. Does libGLESv2 dynamically link to the HW lib, or does libEGL do that? (I found something in EGL's Loader.cpp that seems to dynamically link the OpenGL APIs.)
2. So when I call an OpenGL API, does it go through libEGL (since that's where the dynamic binding happens) and from there to libGLESv2?
Thanks a lot for your help; it's starting to make sense now.
I am looking to clear up my confusion about how to capture and render audio using native code on the Android platform. What I've heard is that there's an API for audio called OpenSL. Are there any recommended guides and tutorials on how to use it?
Also, are there any good audio wrappers for OpenSL, such as an OpenAL wrapper or something? I've developed the audio part with OpenAL on other platforms, so it would be nice to re-use that code.
Are there limitations to OpenSL, like something that has to be done in Java code?
How much does OpenSL differ from OpenAL?
Thanks!
There's a native audio example included in the samples/ directory of recent NDK releases.
It claims to use OpenSL ES.
OpenSL and OpenAL differ quite a bit in terms of interfaces. However, they follow a very similar pattern, and the use cases are similar too. One thing to be aware of is that in the current implementation, OpenSL suffers from the same latency issues as the Java audio APIs.
When using OpenSL you don't have to call any Java code. The latest NDK has support for a native asset manager, so no more going through JNI to pass byte arrays around :) (see the sketch below)
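A minimal sketch of that asset manager (the header is android/asset_manager_jni.h, API level 9+; the function name read_asset and the buffer handling are my own choices):

    #include <jni.h>
    #include <android/asset_manager.h>
    #include <android/asset_manager_jni.h>
    #include <vector>

    // Read an asset entirely into memory without shuttling byte
    // arrays through JNI. javaAssetManager is the application's
    // android.content.res.AssetManager, passed down once from Java.
    std::vector<char> read_asset(JNIEnv *env, jobject javaAssetManager,
                                 const char *filename) {
        AAssetManager *mgr = AAssetManager_fromJava(env, javaAssetManager);
        AAsset *asset = AAssetManager_open(mgr, filename, AASSET_MODE_BUFFER);
        if (!asset) return std::vector<char>();

        off_t len = AAsset_getLength(asset);
        std::vector<char> buf(len);
        if (len > 0)
            AAsset_read(asset, &buf[0], len);
        AAsset_close(asset);
        return buf;
    }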
I want to port an application written in C++ to Android. Converting the application from C++ to Java would take a lot of work that I would rather spend on making the application better for that platform, instead of fixing conversion bugs and solving refactoring problems.
The NDK seems a good route to take, but I don't want to miss a platform (if it represents a considerable % of the market) just because the NDK doesn't or won't support it.
Android claims to support MIPS, ARM, x86 and others, but all the implementations I have actually seen are on ARM (or ARM-compatible) hardware.
I checked that on this site:
http://www.pdadb.net/
Would it be a bad decision to use the NDK?
Are there any non-ARM devices that run or will run Android?
Where can I find more information about this?
Thanks in advance!
At this point the problem is not that you would lose market share due to CPU architecture, as there are very few non-ARM Android devices at the moment; the problem is that you may lose market share by requiring users to run Android 2.3 or later, which you would have to target in order to create a fully native application with access to the window, sensor, and input subsystems.
Avoiding rewriting code is a good goal, but you would likely have to rewrite portions of the code anyway, due to Android's dissimilar window and life-cycle APIs. Now you would have to rewrite some important parts of the code in C++ rather than in Java.
You could try a hybrid approach where you write most of the UI in Java, then make calls to your existing C++ code.
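A minimal sketch of such a bridge (the package and class names are made up for the example; they must match a Java class that declares public static native String process(String in); and loads the library with System.loadLibrary):

    #include <jni.h>
    #include <string>

    // Java side (hypothetical):
    //   package com.example.app;
    //   class Bridge { public static native String process(String in); }
    extern "C" JNIEXPORT jstring JNICALL
    Java_com_example_app_Bridge_process(JNIEnv *env, jclass, jstring input) {
        const char *utf = env->GetStringUTFChars(input, NULL);
        // Hand the string to the existing C++ code here; this stub
        // just tags it so the round trip is visible.
        std::string result = std::string("processed: ") + utf;
        env->ReleaseStringUTFChars(input, utf);
        return env->NewStringUTF(result.c_str());
    }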
Are you making a game? Then you'll probably want to deal with these issues and press on with the NDK. If not, try implementing as much of the program as possible in Java and use the NDK for the complex, tested parts of your code that need to be fast.
The documentation gives the following:
The latest release of the NDK supports these ARM instruction sets:
ARMv5TE (including Thumb-1 instructions)
ARMv7-A (including Thumb-2 and VFPv3-D16 instructions, with optional support for NEON/VFPv3-D32 instructions)
Future releases of the NDK will also support:
x86 instructions (see CPU-ARCH-ABIS.HTML for more information)
Would it be a bad decision to use the NDK?
For algorithms, the NDK is fine. For games, the NDK is fine. For implementing an ordinary app, the NDK will not be terribly helpful.
Are there any non-ARM devices that run or will run Android?
Google TV runs on x86 (Atom).
Use this improved NDK: http://developer.mips.com/android/download-android-ndk/
I'm looking for a way to decode AAC natively to PCM on Android. The decoder source code is at https://android.googlesource.com/platform/external/opencore/+/master/codecs_v2/audio/aac/dec, but I'm not familiar with the NDK at all.
1) There's no way of doing this directly using the Android SDK, but can this be done via the NDK?
2) I would especially be interested in a simple way of accessing the decoder from the SDK, with a short "bridge" through the NDK. Is this feasible?
3) Would such a solution work on all Android versions (1.5-2.2)?
4) I guess I could use http://code.google.com/p/aacplayer-android/ instead, but it looks like this implementation is fairly CPU intensive. Does anyone have experience with this?
Not sure what the policy is here for answering really old questions, but what's working well for me is using OpenSL with the NDK; it comes built in, and in fact the NDK comes with an example, "native-audio", that demonstrates what you need.
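For context, the core of that sample is a buffer-queue audio player; decoded PCM is what gets enqueued into it. A sketch of the setup, assuming the engine and output mix were already created as in the sample (mono 16-bit 44.1 kHz is an arbitrary example format, and error checks are trimmed):

    #include <SLES/OpenSLES.h>
    #include <SLES/OpenSLES_Android.h>

    // Create a player whose source is an Android simple buffer queue
    // (where PCM buffers get enqueued) and whose sink is the output mix.
    SLresult make_pcm_player(SLEngineItf engine, SLObjectItf outputMixObj,
                             SLObjectItf *pPlayerObj) {
        SLDataLocator_AndroidSimpleBufferQueue loc_bq =
            { SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2 };
        SLDataFormat_PCM format = { SL_DATAFORMAT_PCM, 1, SL_SAMPLINGRATE_44_1,
                                    SL_PCMSAMPLEFORMAT_FIXED_16,
                                    SL_PCMSAMPLEFORMAT_FIXED_16,
                                    SL_SPEAKER_FRONT_CENTER,
                                    SL_BYTEORDER_LITTLEENDIAN };
        SLDataSource src = { &loc_bq, &format };

        SLDataLocator_OutputMix loc_mix = { SL_DATALOCATOR_OUTPUTMIX, outputMixObj };
        SLDataSink sink = { &loc_mix, NULL };

        // Ask for the buffer-queue interface so PCM can be enqueued later.
        const SLInterfaceID ids[] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE };
        const SLboolean req[] = { SL_BOOLEAN_TRUE };

        SLObjectItf player;
        SLresult r = (*engine)->CreateAudioPlayer(engine, &player, &src, &sink,
                                                  1, ids, req);
        if (r != SL_RESULT_SUCCESS) return r;
        r = (*player)->Realize(player, SL_BOOLEAN_FALSE);
        *pPlayerObj = player;
        return r;
    }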
One thing you may look into is FFmpeg; it is GPL, and TuneIn Radio posted their mods here: http://radiotime.com/mobile/android#/support/open-source