How to loop an audio file in Android without any delay?

I have started an android.media.MediaPlayer with:
mp1.start()
and then tried looping it with:
setLooping(true);
but this ends up with a delay before the file plays again.
I am trying to play an mp3 file containing a rhythm at a set tempo. Is there a better way of looping it so that the tempo is not disturbed and the rhythm plays seamlessly, without any stutter or delay?
Should I use SoundPool instead?

Most best-practice advice for this particular case recommends using the .ogg format. You can convert your file easily with VLC media player.
Wiki for the .ogg file format: http://en.wikipedia.org/wiki/.ogg
Another solution is SoundPool, and a third is to use Audacity and cut the quiet/blank sound from the end of your audio file.

If your audio is not long, use SoundPool for low-latency playback instead of MediaPlayer. Also convert it to .ogg, as others have already pointed out.
Edit: if it is just a tempo and not a continuous sound, then maybe you can also measure the latency and seek your audio based on that, but I am not sure you will get better results that way.

MediaPlayer solutions:
If you insist on using MediaPlayer, then you can:
either crop the silence at the end of your audio files, so there is no gap between two playback loops,
or create a custom solution yourself, like the one described here (one common variant is sketched below).
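For illustration, here is a hedged sketch of one common custom approach (not necessarily the exact one linked above): keep two MediaPlayer instances for the same resource and chain them with setNextMediaPlayer() (API 16+), so the next pass is already prepared when the current one finishes. R.raw.rhythm is a placeholder resource name.

import android.content.Context;
import android.media.MediaPlayer;

public class LoopingPlayer implements MediaPlayer.OnCompletionListener {
    private final Context context;
    private MediaPlayer current;
    private MediaPlayer next;

    public LoopingPlayer(Context context) {
        this.context = context;
        current = MediaPlayer.create(context, R.raw.rhythm); // placeholder resource
        next = MediaPlayer.create(context, R.raw.rhythm);
        current.setNextMediaPlayer(next);                    // "next" starts when "current" ends
        current.setOnCompletionListener(this);
    }

    public void start() {
        current.start();
    }

    @Override
    public void onCompletion(MediaPlayer finished) {
        finished.release();
        // The already-prepared player is now playing; line up a fresh one behind it.
        current = next;
        next = MediaPlayer.create(context, R.raw.rhythm);
        current.setNextMediaPlayer(next);
        current.setOnCompletionListener(this);
    }
}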
SoundPool alternative:
Now, from my personal experience, if you want to loop files that are small in size and duration (not more than 1 MB), SoundPool is more convenient, and no comparable problems seem to be reported for it, in contrast to MediaPlayer. There have been many complaints about looping sounds with MediaPlayer, so SoundPool is generally preferred for looping; a short sketch follows.
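A minimal SoundPool looping sketch (using the pre-API-21 constructor for brevity; R.raw.rhythm is a placeholder resource and context is assumed to be an Activity or other Context):

SoundPool soundPool = new SoundPool(1, AudioManager.STREAM_MUSIC, 0);
soundPool.setOnLoadCompleteListener(new SoundPool.OnLoadCompleteListener() {
    @Override
    public void onLoadComplete(SoundPool pool, int sampleId, int status) {
        if (status == 0) {
            // loop = -1 loops forever; full volume, priority 1, normal playback rate
            pool.play(sampleId, 1f, 1f, 1, -1, 1f);
        }
    }
});
soundPool.load(context, R.raw.rhythm, 1);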
SoundPool size limit:
If you are concerned about SoundPool's size limit, keep in mind that it has a 1 MB buffer limit per track. This limit applies not to the file size but to the decompressed raw PCM data: SoundPool decompresses the loaded audio to PCM so it is ready to play instantly, without decoding latency. If the audio you load is heavily compressed, such as MP3, it can blow up quite a bit when decompressed.
Improve performance:
Also, as suggested in another answer, ".ogg" files appear, according to many sources, to perform better than ".mp3" in general. So you could convert your files for better performance, although I don't think you will see an improvement specifically in looping.
To convert your files you can use an online tool such as this. If you convert your files, remember to also make these changes:
Change your sound file's sampling rate to 16000 Hz.
Change the audio channel to mono instead of stereo.
Make sure the resulting file is smaller than 1 MB.

Please try to do it this way:
// Create the player from a raw resource and enable looping before starting it.
MediaPlayer audio = MediaPlayer.create(this, R.raw.pl);
audio.setLooping(true);
audio.start();

Related

mediaPlayer.getCurrentPosition() > mediaPlayer.getDuration() at the end of playing mp3 file

I'm playing mp3 files streamed from the network in my application, and some of them show weird behavior: mediaPlayer.getCurrentPosition() is larger than mediaPlayer.getDuration() at the end, by about 3 seconds.
The mp3 files are CBR encoded.
What might be the reason for this?
I finally solved the problem by re-encoding the mp3 files; this is the command I'm using:
lame --mp3input -t -m s -b 128 --cbr input.mp3 output.mp3
There are a few reasons you can get this behavior.
First, people seem to have had better results with mp3 files at exactly 44100 Hz, apparently because the MediaPlayer class assumes this value and scales the time accordingly, producing strange values for files with a different sample rate.
You should also check the channel mode and try Joint Stereo or forced L/R Stereo. Joint should be the default, but your files might have been badly encoded previously, so it's worth trying. Note that forced L/R Stereo may lose quality compared to Joint at the same bitrate.
It would also be useful to check the output of soxi, which is part of the sox package (you can also do this with ffmpeg); it will give you the sample rate, bit rate and number of channels.
You might also want to check the raw content of the mp3 files, if you processed them with any app, for garbage XML content that may have been inserted during export.
If you have the possibility to modify the mp3 files you're streaming (which it sounds like you do, since you can tell the bitrate), these are what I would try first. If it's more like user-uploaded content, maybe you should look at another solution instead, like ExoPlayer, which has a few thousand stars and active development. It still wraps the MediaPlayer API, but it's worth a try.
You also have to consider that it might be a threading problem, where the player stops playing but the timer keeps going, giving you a position greater than the actual duration of the song. Three seconds seems a bit too much to explain it that way, but it's a thought.
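If re-encoding is not an option and the position is only used to drive a progress bar, a small clamp on the app side can at least hide the overshoot (a workaround, not a fix):

int duration = mediaPlayer.getDuration();
// Never report a position beyond the reported duration.
int position = Math.min(mediaPlayer.getCurrentPosition(), duration);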

Using Stagefright to stream small audio files in Android

I have a lot of small .ogg sound files (about 35-50 KB each) that need to be played from my AWS S3 bucket and then cached on the device once played. There are approximately 200 sounds, and that is 1/10th of what the finished application will use.
I'm not sure whether the Stagefright library is my best bet or an entirely different approach is needed. Should I use Stagefright, or go with another option?
From your query, I feel that you could use SoundPool for your application. Please refer to this link SoundPool for more information.
A couple of examples can be found in MediaActionSound.java, SoundPoolTest.java and Soundclips.java.
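Once a downloaded .ogg has been cached on disk, SoundPool can load it straight from the file path. A hedged sketch, where cachedFile is assumed to be the File the S3 download was written to:

SoundPool soundPool = new SoundPool(5, AudioManager.STREAM_MUSIC, 0);
soundPool.setOnLoadCompleteListener(new SoundPool.OnLoadCompleteListener() {
    @Override
    public void onLoadComplete(SoundPool pool, int sampleId, int status) {
        if (status == 0) {
            pool.play(sampleId, 1f, 1f, 1, 0, 1f); // play once at normal volume and rate
        }
    }
});
soundPool.load(cachedFile.getAbsolutePath(), 1); // load the cached .ogg from disk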

Streaming multiple OGG simultaneously in Android

I need to be able to play two or more (let's say, up to 5) short ogg files simultaneously. And by simultaneously I mean in perfect synchrony. I am able to load them to SoundPool and play, but this sometimes creates a noticeable difference in playback start time, which I want to get rid of.
From my understanding this can be avoided by mixing the PCM into one buffer and playing that. But OGG files are not PCM and need to be decoded efficiently before playing, and the latency must be very low, ideally starting as soon as the user presses the button. So I figured I need a way to stream OGG into PCM, and as I receive buffers I would mix them and feed them to AudioTrack. My requirement is Android 2.3.3+, so I cannot use any of the new codecs provided in Jelly Bean.
Also, although the OGGs themselves are small, there are a lot of them, so keeping them all decoded in memory (SoundPool or some pre-decoding) may cause problems too.
Can someone give me a tip on where to dig? Can OpenSL ES do that for me? Or should I think about integrating ffmpeg? And is it even possible to stream simultaneous files with low latency?
Thanks
You can play sounds using AssetPlayers, but this sometimes creates a noticeable difference in playback start time, yeah...
So I recommend decoding the ogg with Ogg Vorbis (like here) and then using that PCM buffer with a BufferPlayer.
By the way, check these OpenSL ES wrappers:
https://github.com/Suvitruf/Android-ndk/tree/master/OpenSLES
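To illustrate the mixing idea only: sum two decoded 16-bit PCM buffers of equal length and identical sample rate/channel layout, clamp the result, and write it to a streaming AudioTrack. Decoding the .ogg files into pcmA and pcmB (short[] buffers) is assumed to happen elsewhere, e.g. in an NDK Vorbis decoder as suggested above.

int sampleRate = 44100; // assumed; must match the decoded data
int minBuf = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        minBuf, AudioTrack.MODE_STREAM);
track.play();

short[] mixed = new short[pcmA.length];          // pcmA, pcmB: decoded buffers of equal length
for (int i = 0; i < mixed.length; i++) {
    int sum = pcmA[i] + pcmB[i];                 // mix by summing samples
    // clamp to the 16-bit range to avoid wrap-around distortion
    mixed[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
}
track.write(mixed, 0, mixed.length);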

Gap buffering in Android devices

I'm building a buffering engine to play streams from a URL. I need to buffer both mp3 and aac (on devices that support it), so I can't pass the URL directly to MediaPlayer. I tried this method: I have two synchronized threads, one that creates files from the buffered data and a second that plays the files created. The problem is that when MediaPlayer switches from one file to another, there is a little gap. How can I remove it? It is very annoying.
Maybe my method is wrong; if so, can anyone provide a working method without choppy sound?
Thank you very much in advance.
It seems you are trying to implement gapless playback, right?
Towards this you need to define the level of gapless playback you want to achieve: should it work across file formats/codecs and across audio attributes like sample rate, number of channels, etc.?
With your approach, you will surely see gaps across different streams (file formats, compression, audio attributes).
To achieve true gapless playback at the application level (my approach), you need to do the following:
Implement a custom stack that takes the input files, decodes them and produces PCM samples. This stack will have parsers (MP3, AAC) and decoders (MP3, AAC, ...).
Pass the PCM samples through a resampler to produce PCM samples with a common sample rate (a naive sketch follows this list).
Add buffering modules at the input (file) and the output (resampled PCM data).
Use the AudioTrack class of the Android SDK for playout.
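As a sketch of the resampling step above (naive linear interpolation on a mono 16-bit buffer; a real implementation would use a proper polyphase resampler):

static short[] resample(short[] in, int srcRate, int dstRate) {
    int outLen = (int) ((long) in.length * dstRate / srcRate);
    short[] out = new short[outLen];
    for (int i = 0; i < outLen; i++) {
        double srcPos = (double) i * srcRate / dstRate; // position in the source buffer
        int i0 = (int) srcPos;
        int i1 = Math.min(i0 + 1, in.length - 1);
        double frac = srcPos - i0;
        out[i] = (short) ((1 - frac) * in[i0] + frac * in[i1]); // interpolate between neighbours
    }
    return out;
}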
If you stick to one file format, codec and set of audio attributes, then at the application level you can concatenate all the files in the playlist and provide the result to MediaPlayer for playback. (Since the audio streams are small, this solution can be practical. The only obstacle would be differing stream attributes; if the audio OMX components in the Android multimedia stack support dynamic reconfiguration, this should be no issue at all.)
Shash

For android media player mp3 vs. wav

I want to know whether it is faster to load and play a small wav or a small mp3 file with the Android media player. The wavs are about 30 KB and the same files as mp3s are about 20 KB. The mp3s have the advantage of saving resource space. The sound files have to be played with split-second timing.
For such small sounds, you will get best results with SoundPool.
Even the weakest Android devices have ample computing power to play an mp3, and probably have hardware acceleration for it as well. The real question is the setup overhead of playing a wav vs. an mp3, which should be fairly easy to measure programmatically.
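For example, one rough way to measure that setup overhead (a sketch; R.raw.click_wav and R.raw.click_mp3 are placeholder resources, and this is assumed to run inside an Activity):

long t0 = SystemClock.elapsedRealtime();
MediaPlayer wavPlayer = MediaPlayer.create(this, R.raw.click_wav);
long wavSetupMs = SystemClock.elapsedRealtime() - t0;

long t1 = SystemClock.elapsedRealtime();
MediaPlayer mp3Player = MediaPlayer.create(this, R.raw.click_mp3);
long mp3SetupMs = SystemClock.elapsedRealtime() - t1;

Log.d("SetupTiming", "wav: " + wavSetupMs + " ms, mp3: " + mp3SetupMs + " ms");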
I'm a little surprised you're getting such a poor compression ratio with mp3. Even lossless compression algorithms tend to get a 2:1 ratio on wav. Given that an Android device probably isn't hooked up to audiophile-quality speakers, you should be able to get away with 64 kbit/s mono mp3, or even lower. If you can get the file size under 4 KB, it will fit in a single memory page, which is about as low as you can get for OS overhead.
If for whatever reason you're stuck with a 1.5:1 compression ratio, it's probably not worth the extra work.
Wav files use more space because they have a higher sample rate: roughly speaking, more points tracing out the sound wave, so in theory it takes more processing to play a wav. Wav is also uncompressed, meaning it keeps all of the information from the source it was taken from; when you convert a CD to wav you more or less have a copy of the original, whereas converting to mp3 uses fewer reference points and detail is lost. Secondly, most mp3 encoders normalize the music, which is a fancy way of saying they make the quiet parts louder and the loud parts quieter. All that being said, some people can't hear the difference, and it mostly depends on what type of headphones/speakers you are listening on. Either way, there shouldn't be a delay with either format; the only difference should be the sample rate, or "resolution", of the sound file.
I have no technical "stuff" to back me up here, but since no one else has taken a crack at this, I will.
I know that mp3s have "better" compression than wavs, so the file is smaller. This would imply, however, that it takes more CPU to "uncompress" the files. (This may be done on dedicated hardware, so it could be a moot point.) Additionally, since the files will be inflated, it may be deceiving to see the mp3's smaller size and assume it will be quicker to load and play.
Considering the wav format's history, and that it serves as a 'lowest common denominator' for exchanging sound files between different programs (per Wikipedia), my educated guess is that a small wav file would be faster to load and play. This depends heavily on Android's software implementation of its audio libraries as well as the hardware, so if anyone knows more, it would be great to hear their take.
