I'm writing an Android application that needs to seek to specific points in a large mp3 audio file (~90 minutes) with good accuracy.
Currently, I'm using an OpenSL ES approach with an audio player object whose URI data source specifies the mp3 file and MIME information.
To test this out, I use the SLSeekItf interface on the player to seek to specific points (specified in milliseconds). However, I find that the seeking accuracy is poor and inconsistent: the audio is often 1-10 seconds off from where it should be, sometimes ahead, sometimes behind. Accuracy is a little better with shorter mp3 files, but nowhere near close enough.
The seek modes (SL_SEEKMODE_ACCURATE and SL_SEEKMODE_FAST) don't seem to make any difference on SLSeekItf.
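For reference, here's roughly how the seek is set up (a trimmed sketch only; playerObject stands for the already-created and realized URI audio player, and error handling is stripped out):

```cpp
// Minimal sketch of the seek call (player object already created and realized).
#include <SLES/OpenSLES.h>

void seekToMs(SLObjectItf playerObject, SLmillisecond positionMs) {
    SLSeekItf seekItf;
    SLresult result = (*playerObject)->GetInterface(playerObject, SL_IID_SEEK, &seekItf);
    if (result == SL_RESULT_SUCCESS) {
        // SL_SEEKMODE_ACCURATE vs. SL_SEEKMODE_FAST makes no audible difference here.
        (*seekItf)->SetPosition(seekItf, positionMs, SL_SEEKMODE_ACCURATE);
    }
}
```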
On other platforms, I can get the seek position accurate to within 50 ms, which is barely noticeable, so I know this is possible.
-Does anyone know how to get better accuracy out of the OpenSL audio player?
-Are there known issues with this implementation?
-Are there other mp3 decoders available that offer better performance?
Thanks
I also posted this question on the Google NDK Group:
https://groups.google.com/forum/#!topic/android-ndk/rzVr3A0DjBs
While I never got an official answer from anyone at Google, the feedback I received seems to indicate that MediaPlayer and/or playing audio from a URI with OpenSL ES is known to be buggy.
I ended up solving this problem by using a 3rd party mp3 decoder with seeking ability and a Buffer Queue Audio Player object in OpenSL ES to playback the audio samples.
Hardly easy to do, but it works.
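For anyone heading down the same path, here's the rough shape of the buffer-queue side (a sketch only: the engine and output mix are assumed to be created and realized elsewhere, and decodeNextPcmChunk() is a placeholder for whichever third-party decoder you pick, returning 16-bit stereo PCM after you've done the seeking inside the decoder):

```cpp
// Sketch: OpenSL ES buffer-queue player fed with PCM from a third-party mp3
// decoder. engineItf and outputMixObject are assumed to exist already;
// decodeNextPcmChunk() is a placeholder for the decoder you choose
// (fills the buffer with 16-bit stereo PCM, returns bytes written, 0 at EOF).
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>
#include <cstddef>
#include <cstdint>

size_t decodeNextPcmChunk(int16_t *buf, size_t maxBytes);   // placeholder

static const int kBufferFrames = 4096;
static int16_t pcmBuffer[2][kBufferFrames * 2];   // double buffer, stereo
static int currentBuffer = 0;

static void bufferCallback(SLAndroidSimpleBufferQueueItf bq, void * /*context*/) {
    size_t bytes = decodeNextPcmChunk(pcmBuffer[currentBuffer],
                                      sizeof(pcmBuffer[currentBuffer]));
    if (bytes > 0) {
        (*bq)->Enqueue(bq, pcmBuffer[currentBuffer], (SLuint32)bytes);
        currentBuffer ^= 1;
    }
}

void createPcmPlayer(SLEngineItf engineItf, SLObjectItf outputMixObject,
                     SLObjectItf *playerObject, SLPlayItf *playItf,
                     SLAndroidSimpleBufferQueueItf *bqItf) {
    SLDataLocator_AndroidSimpleBufferQueue locBq =
        { SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2 };
    SLDataFormat_PCM formatPcm = {
        SL_DATAFORMAT_PCM, 2, SL_SAMPLINGRATE_44_1,
        SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
        SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT,
        SL_BYTEORDER_LITTLEENDIAN };
    SLDataSource audioSrc = { &locBq, &formatPcm };

    SLDataLocator_OutputMix locOutMix = { SL_DATALOCATOR_OUTPUTMIX, outputMixObject };
    SLDataSink audioSnk = { &locOutMix, NULL };

    const SLInterfaceID ids[] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE };
    const SLboolean req[] = { SL_BOOLEAN_TRUE };

    (*engineItf)->CreateAudioPlayer(engineItf, playerObject, &audioSrc, &audioSnk,
                                    1, ids, req);
    (*playerObject)->Realize(*playerObject, SL_BOOLEAN_FALSE);
    (*playerObject)->GetInterface(*playerObject, SL_IID_PLAY, playItf);
    (*playerObject)->GetInterface(*playerObject, SL_IID_ANDROIDSIMPLEBUFFERQUEUE, bqItf);
    (*bqItf)->RegisterCallback(*bqItf, bufferCallback, NULL);

    // Prime the queue once, then start playing. To seek, stop the player,
    // Clear() the queue, seek the decoder, and prime again.
    bufferCallback(*bqItf, NULL);
    (*playItf)->SetPlayState(*playItf, SL_PLAYSTATE_PLAYING);
}
```

The key point is that OpenSL ES never sees the mp3 at all; it only ever plays the PCM you hand it, so the seek accuracy is entirely down to the decoder.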
Related
I have 5 audio files, each 5 seconds long. I want to play the sound files one after another, but with the condition that while one file is playing, the next file should start after 4 seconds, i.e. adjacent audio files should overlap by 1 second. How can I implement this? Which audio player would you suggest?
The amount of fine-grained work you can do with audio playback on the Java side is pretty limited on Android.
It sounds like you will need to mix your sounds at some point during their playback to overlap.
The best way to do this, in my head, is through a C++ library called Oboe (I am currently working with it). It's a library created by Google for audio playback. Now hold on, let me explain! I know bringing in C++ (especially if you're only on the Java stack right now) can add a bit of time to your project.
The reason this came to mind is that when playing audio this way (through Oboe/C++), you move the raw audio samples through a buffer stream yourself. The accompanying C++ sample code also has a Mixer class that you can feed 2 different audio sources (up to 100, actually) to mix, and then eventually render through the buffer stream.
Using this methodology, you can add specific logic to control when each clip starts playing (after 4 seconds, in your case), at which point you mix the first second of the next clip with the currently playing clip.
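To make that concrete, here is a minimal sketch (hand-rolled mixing in the Oboe callback rather than the samples' Mixer class; it assumes two clips already decoded to mono float PCM at 48 kHz, and OverlapMixer/clipA/clipB are made-up names for the example):

```cpp
// Sketch: an Oboe output stream whose callback mixes two pre-decoded mono
// float clips, starting the second clip 4 seconds after the first so they
// overlap by roughly 1 second.
#include <oboe/Oboe.h>
#include <cstring>
#include <vector>

class OverlapMixer : public oboe::AudioStreamCallback {
public:
    std::vector<float> clipA, clipB;                 // decoded PCM, mono, 48 kHz
    static constexpr int64_t kSampleRate = 48000;
    static constexpr int64_t kClipBStart = 4 * kSampleRate;   // B starts at 4 s

    oboe::DataCallbackResult onAudioReady(oboe::AudioStream * /*stream*/,
                                          void *audioData,
                                          int32_t numFrames) override {
        float *out = static_cast<float *>(audioData);
        std::memset(out, 0, sizeof(float) * numFrames);
        for (int32_t i = 0; i < numFrames; ++i) {
            int64_t pos = framesPlayed + i;
            if (pos < (int64_t)clipA.size())
                out[i] += clipA[pos];                // clip A plays from t = 0
            int64_t posB = pos - kClipBStart;
            if (posB >= 0 && posB < (int64_t)clipB.size())
                out[i] += clipB[posB];               // clip B joins at t = 4 s
        }
        framesPlayed += numFrames;
        return oboe::DataCallbackResult::Continue;
    }

private:
    int64_t framesPlayed = 0;
};

// Open and start the stream (error handling omitted):
void startPlayback(OverlapMixer *mixer) {
    oboe::AudioStreamBuilder builder;
    builder.setFormat(oboe::AudioFormat::Float)
           ->setChannelCount(1)
           ->setSampleRate(48000)
           ->setCallback(mixer);
    oboe::AudioStream *stream = nullptr;
    builder.openStream(&stream);
    stream->requestStart();
}
```

Extending this to your five clips is just a matter of tracking a start frame per clip instead of the single kClipBStart offset.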
Now the exciting bit is that you may be able to replicate this process in Java! I found this post, which may be of help to you:
Android: How to mix 2 audio files and reproduce them with soundPool
I do warn you, though: rendering audio in this way (through buffer streams) is a complicated process, and some extra research may be needed to fully understand it. I can't say I know all of the functionality of the Java audio libraries, but I'm willing to bet they don't have much support for mixing sound in the way you need. So most likely you will have to mix it yourself, or, as a last resort, use the NDK (C++).
Hopefully this answer helps. Best wishes in your research, and hopefully you find a simple way that works. (If you do, don't forget to share your findings on this question!)
MediaPlayer's seekTo() and getCurrentPosition() work inaccurately and only approximately, and this issue has been left unsolved by Google for a long time.
I need a good library that can return the precise playback position in milliseconds and seek where needed. I've tried a few, like Presto, Vitamio and ExoPlayer (for which I can't find any documentation on how to play from the SD card), and still haven't found a good one.
Using ffmpeg is complex for me, and the only Java wrapper I've found is only for decoding, not playback.
Please give me advice on how to play back audio and get exact values from getCurrentPosition() and seekTo().
Check out FFmpegMediaPlayer, an FFmpeg-based playback library for Android:
https://github.com/wseemann/FFmpegMediaPlayer
I feel your pain; the built-in MediaPlayer seems to be a steaming pile of garbage when it comes to seeking (though I am seeking videos, not audio).
Also be aware that your encoding could be wrong -- double check that the length reported in the codec tags matches the length reported by MediaPlayer.
I need to be able to play two or more (let's say, up to 5) short ogg files simultaneously. And by simultaneously I mean in perfect synchrony. I am able to load them to SoundPool and play, but this sometimes creates a noticeable difference in playback start time, which I want to get rid of.
From my understanding, this can be avoided by mixing the PCM data into one buffer and playing that. But OGGs are not PCM and need to be decoded efficiently before playing, and the latency must be very low, ideally starting as soon as the user presses the button. So I figured I need a way to stream OGG into PCM, and as I receive buffers I would mix them and feed them to AudioTrack. My requirement is Android 2.3.3+, so I cannot use any of the new codecs provided in Jelly Bean.
Also, although the OGGs themselves are small, there are a lot of them, so keeping them all decoded in memory (SoundPool or some pre-decoding) may cause problems too.
Can someone give me a tip on where to dig? Can OpenSL ES do that for me? Or should I think about integrating ffmpeg? And is it even possible to stream simultaneous files with low latency?
Thanks
You can play sounds using AssetPlayers, but this sometimes creates a noticeable difference in playback start time, yeah...
So I recommend decoding the ogg with Ogg Vorbis (like here) and then using the resulting PCM buffer with a BufferPlayer.
By the way, check out these OpenSL ES wrappers:
https://github.com/Suvitruf/Android-ndk/tree/master/OpenSLES
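If it helps, the decode step with a single-file decoder like stb_vorbis (just one option among several; the wrappers above ship their own decoding code) looks roughly like this, with the buffer-queue player setup omitted:

```cpp
// Sketch: decode a whole .ogg file to interleaved 16-bit PCM in memory with
// stb_vorbis (a single-file decoder dropped into the NDK project). The
// OpenSL ES buffer-queue player that consumes the PCM is not shown here.
#include <cstdlib>
#include "stb_vorbis.c"

struct DecodedClip {
    short *samples;    // interleaved 16-bit PCM, free() when done
    int frames;        // frames (samples per channel)
    int channels;
    int sampleRate;
};

bool decodeOgg(const char *path, DecodedClip *clip) {
    clip->frames = stb_vorbis_decode_filename(path, &clip->channels,
                                              &clip->sampleRate, &clip->samples);
    return clip->frames > 0;
}
```

Once every clip is plain PCM in memory, starting several of them in sync (or pre-mixing them by summing samples) is straightforward.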
I've read a lot of questions on Stack Overflow and other pages about this topic, but didn't find a really up-to-date solution:
In an Android app I've got two audio files (on the local file system), which are encoded as mp3, ogg or wav.
I just want to play them exactly synchronously, have seeking capability, and control the volume of each track individually. Using MediaPlayer this isn't possible because of the well-known latency issues in Android.
So I think having two audio player instances (from whatever library) will always result in bad latencies, so that doesn't seem to be the solution.
So in my opinion the only solution would be to mix the audio inputs together into a single mixed input, which can be played by one player. I read a lot about Android's AudioTrack, buffers, and the OpenSL ES implementation, but always ended up with the same conclusion: buffers only support raw PCM audio data. OK, so I have to decode the mp3/ogg myself?
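To illustrate the mixing step I have in mind (assuming both tracks are already decoded to 16-bit PCM at the same sample rate and channel layout; mixTracks is just a made-up name), it would be something like:

```cpp
// Sketch: sum two decoded 16-bit PCM tracks, each with its own volume,
// into one buffer that a single player (AudioTrack or an OpenSL ES
// buffer queue) could consume.
#include <algorithm>
#include <cstddef>
#include <cstdint>

void mixTracks(const int16_t *trackA, float gainA,
               const int16_t *trackB, float gainB,
               int16_t *out, size_t samples) {
    for (size_t i = 0; i < samples; ++i) {
        // Mix in a wider type, then clamp to avoid wrap-around distortion.
        int32_t mixed = static_cast<int32_t>(trackA[i] * gainA)
                      + static_cast<int32_t>(trackB[i] * gainB);
        out[i] = static_cast<int16_t>(
            std::max<int32_t>(-32768, std::min<int32_t>(32767, mixed)));
    }
}
```

Seeking would then mean seeking both decoders to the same position before refilling the buffer, and the per-track volume is just the two gain factors.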
My question now is: is there any library that can help me either a) do exactly what I want with a simple API, or b) decode mp3/ogg into memory so I can use that data with AudioTrack or OpenSL ES?
Whether it's native or Java is unimportant; it just has to work.
The minimum API level is 15 (Android 4.0.3, the most current version at the time of writing).
I'm currently implementing sound effect mixing on Android via OpenSL ES. I have an initial implementation going, but I've encountered some issues.
My implementation is as follows:
1) For each sound effect I create several AudioPlayer objects (one for each simultaneous sound), each using an SLDataLocator_AndroidFD data source that in turn refers to an OGG file. For example, if I have a gun-firing sound (let's call it gun.ogg) that is played in rapid succession, I use around 5 AudioPlayer objects that refer to the same gun.ogg audio source and the same output mix object.
2) When I need to play that sound effect, I search through all the AudioPlayer objects I created and find one that isn't currently in the SL_PLAYSTATE_PLAYING state and use it to play the effect.
3) Before playing a clip, I seek to the start of it using SLSeekItf::SetPosition.
This is working out alright so far, but there is some crackling noise that occurs when playing sounds in rapid succession. I read on the Android NDK newsgroup that OpenSL on Android has problems with switching data sources. Has anyone come across this issue?
I'm also wondering if anyone else has seen or come up with a sound-mixing approach for OpenSL ES on Android. If so, does your approach differ from mine? Any advice on the crackling noise?
I've scoured the internet for OpenSL ES documentation and example code, but there isn't much out there with regard to mixing (only loading, which I've figured out already). Any help would be greatly appreciated.
Creating many instances of audio players is probably not the best approach. Unfortunately, the Android (2.3) implementation of OpenSL ES doesn't support SLDynamicSourceItf, which would be similar to OpenAL's source binding interface. One approach would be to create multiple stream (buffer queue) players. You would then search for one that isn't currently playing and start streaming your sound effect to it from memory. It's not ideal, but it's doable.
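A rough sketch of that pool idea (not tested code, just the shape of it: the players array is assumed to be filled with realized PCM buffer-queue players during init, and the effect is assumed to be already decoded to PCM):

```cpp
// Sketch of the player pool: N PCM buffer-queue players created during init;
// pick an idle one and enqueue the (already decoded) PCM for the effect.
// Player creation and decoding are assumed to have happened elsewhere.
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

static const int kNumPlayers = 5;

struct EffectPlayer {
    SLPlayItf playItf;
    SLAndroidSimpleBufferQueueItf bqItf;
};

static EffectPlayer players[kNumPlayers];   // filled in during init

void playEffect(const short *pcm, SLuint32 sizeInBytes) {
    for (int i = 0; i < kNumPlayers; ++i) {
        SLuint32 state = SL_PLAYSTATE_PLAYING;
        (*players[i].playItf)->GetPlayState(players[i].playItf, &state);
        if (state != SL_PLAYSTATE_PLAYING) {
            // Reuse this idle slot: reset its queue, enqueue the whole clip, play.
            (*players[i].bqItf)->Clear(players[i].bqItf);
            (*players[i].bqItf)->Enqueue(players[i].bqItf, pcm, sizeInBytes);
            (*players[i].playItf)->SetPlayState(players[i].playItf,
                                                SL_PLAYSTATE_PLAYING);
            return;
        }
    }
    // All slots busy: either steal the oldest slot or drop the sound.
}

// Caveat: a buffer-queue player stays in SL_PLAYSTATE_PLAYING after its queue
// drains, so register a queue callback that sets the state back to
// SL_PLAYSTATE_STOPPED (or clears a "busy" flag) when a clip finishes.
```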
You should probably not use the ogg format for sound effects either. You're better off with WAV (PCM) as it won't need to be decoded.
Ogg is fine for streaming background music.