I am developing a game for Android and the desktop with LibGDX, and I am having a problem with playing sounds. The game is a labyrinth-style game: balls roll around, driven by the device's accelerometer. When balls hit the border or one another, a sound is played, with the volume set based on the linear velocity of the collision. The problem is that when the balls get really close to the border, they bounce many times in a short period of time. This ends up bogging down the main thread and the UI starts to stutter, and logcat prints something like "reducing sample rate" because it can't handle the load. Also, when there are a lot of collisions, the sounds keep playing after the collisions have stopped.
I need each sound to be played independently of the others. I was thinking of maybe creating a separate thread for the sounds. Any help would be greatly appreciated.
I'm working on the sounds of my game right now. The latest LibGDX version works fine playing a lot of sounds simultaneously. All you need to do, if you plan to play them at the same time, is cap the maximum number of sounds played (more sounds require more resources from the device) and reduce the sample rate and quality of the most frequently played ones. You can resample your sound with Audacity: try saving it as an OGG file with lower quality and try again. Also, you can load your sound once as a static Sound and play it many times from that same instance without creating a new one.
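For example, here is a minimal sketch of that approach (the class name, the per-frame cap, and "hit.ogg" are my own illustrative choices, not from your code):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.audio.Sound;

public class CollisionAudio {
    private static final int MAX_PER_FRAME = 4; // cap new sound instances per frame
    private final Sound hitSound =
            Gdx.audio.newSound(Gdx.files.internal("hit.ogg")); // load once, reuse
    private int playedThisFrame = 0;

    // Call once at the start of each render() frame.
    public void newFrame() {
        playedThisFrame = 0;
    }

    // Call from the collision callback; volume scales with impact velocity.
    public void onCollision(float velocity, float maxVelocity) {
        if (playedThisFrame < MAX_PER_FRAME) {
            float volume = Math.min(1f, velocity / maxVelocity);
            hitSound.play(volume); // same Sound object, a new stream each call
            playedThisFrame++;
        }
    }

    public void dispose() {
        hitSound.dispose();
    }
}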
Hope this helps you.
Related
I'm making a really small game for Android and am trying to add sounds to it.
I'm using the MediaPlayer class to load the audio file (.ogg or .wav)
I want to use .ogg (or .mp3) to shrink the size of the apk, rather than using .wav files.
I understand why loading (i.e. creating the MediaPlayer) from a .ogg would take longer than from a .wav (compression).
BUT the problem is that when I put the audio on loop with audio.setLooping(true), the game lags significantly each time the audio starts over.
Why? Does the audio get decoded each time it's about to start, even on a loop?
Also, looking at the CPU usage, I see spikes marking the beginning of each pass through the loop, so I'm pretty sure the loop really is causing the lag.
Any explanations/solutions?
(P.S. I'm testing on a really low-end physical phone, but it's good enough for this game; the sudden spikes are what's causing the problem, not the overall usage.)
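For reference, the setup described in the question would look roughly like this (R.raw.background and the surrounding context variable are just placeholders):

import android.media.MediaPlayer;

// Roughly the setup in question: a compressed .ogg looped with MediaPlayer.
MediaPlayer audio = MediaPlayer.create(context, R.raw.background); // decode/prepare happens here
audio.setLooping(true); // the lag reportedly appears each time the loop restarts
audio.start();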
I am developing a musical application that plays several sounds together, like a sort of multitrack program. A timer fires when it's time for all the sounds to play. The sounds have to be played exactly on the beat, perfectly synchronized and mixed.
The easiest solution I found is to have one MediaPlayer for each sound and initialize all of them at the very beginning with MediaPlayer.create() inside my app's onCreate().
To make it more reliable and quick, I made TWO MediaPlayers for each sound.
Then I have a timer that runs a simple loop, similar to this:
for (int w = 0; w < SOUNDS; w++) {
    if (must_play(w)) {
        if (mp[w].isPlaying()) {
            // The first player is busy: use the second one,
            // or rewind the first if both are already playing.
            if (mp2[w].isPlaying())
                mp[w].seekTo(0);
            else
                mp2[w].start();
        } else {
            mp[w].start();
        }
    }
}
I used .seekTo(0) because I found it slightly faster than calling .stop() and .start().
But the sounds are not always perfectly synchronized. A tenth of a second of delay between the .start() calls of two MediaPlayers is very annoying if those two sounds are two drums that are supposed to hit perfectly in line.
Is there a way to force all the MediaPlayers that are instructed to .start() to actually play all at once?
Please note that the sounds may be very short, like drum sounds: the problem is not to keep the multiple media players synchronized over time, but to make them start exactly all together, without delay.
The question is rather tricky because, in my opinion, it involves two problems:
Prioritization, above other processes that may slow down the timer or the app itself.
How to create a single command/object/method (or whatever) that runs all the sounds atomically.
Thank you.
I made it, just by using the SoundPool class instead of MediaPlayer. It's much more responsive and handles multiple tracks very well, even cutting off the oldest playing sound when there aren't enough streams. Although the questions weren't answered directly, the problem is fixed.
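For anyone landing here later, the switch looks roughly like this; SOUNDS, must_play() and context are carried over from the question, and RAW_RES_IDS is an invented placeholder:

import android.media.AudioManager;
import android.media.SoundPool;

// Build a pool with enough streams for the tracks that may overlap.
SoundPool pool = new SoundPool(SOUNDS, AudioManager.STREAM_MUSIC, 0);
int[] soundId = new int[SOUNDS];
for (int w = 0; w < SOUNDS; w++) {
    // Loading is asynchronous, so do it well before the first beat.
    soundId[w] = pool.load(context, RAW_RES_IDS[w], 1);
}

// Inside the timer callback: fire every sound due on this beat back to back.
for (int w = 0; w < SOUNDS; w++) {
    if (must_play(w)) {
        pool.play(soundId[w], 1f, 1f, 1, 0, 1f); // L/R volume, priority, no loop, normal rate
    }
}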
I have searched for this online, but am still a bit confused (as I'm sure others will be if they think of something like this). I'd like to preface by saying that this is not for homework and/or profit.
I wanted to create an app that could listen to your microwave as you prepare popcorn. It would work by sounding an alarm when there's a certain time interval between pops (say 5-6 seconds). Again, this is simply a project to keep me occupied - not for a class.
Either way, I'm having trouble trying to figure out how to analyze the audio input in real time. That is, I need a way to log the time when a "pop" occurs. So that you guys don't think I didn't do any research into the matter, I've checked out this SO question and have extensively searched the AudioRecord function list.
I'm thinking that I will probably have to do something with one of the versions of read() and then compare the recorded audio every 2 seconds or so to the recorded audio of a "pop" (i.e. if 70% or more of the byte[] audioData array is the same as that of a popping sound, then log the time). Can anyone with Android audio input experience let me know if I'm at least on the right track? This is not a question of me wanting you to code anything for me, but a question as to whether I'm on the correct track, and, if not, which direction I should head instead.
I think I have an easier way.
You could use MediaRecorder's getMaxAmplitude() method.
Anytime your recorder detects a big jump in amplitude, you have detected a corn pop!
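A minimal sketch of that idea follows; the 100 ms poll interval and THRESHOLD value are guesses you would tune by ear, and writing to /dev/null is just a way to record without keeping a file:

import android.media.MediaRecorder;
import android.os.Handler;
import java.io.IOException;

final MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
recorder.setOutputFile("/dev/null"); // we only want amplitude, not the recording
try {
    recorder.prepare();
} catch (IOException e) {
    // handle the error (e.g. busy microphone or missing RECORD_AUDIO permission)
}
recorder.start();

final Handler handler = new Handler();
final int THRESHOLD = 12000; // out of ~32767; tune this on your device
handler.postDelayed(new Runnable() {
    @Override public void run() {
        // getMaxAmplitude() returns the peak amplitude since the previous call.
        if (recorder.getMaxAmplitude() > THRESHOLD) {
            long popTime = System.currentTimeMillis(); // log the pop time here
        }
        handler.postDelayed(this, 100); // poll again in 100 ms
    }
}, 100);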
Check out this code (ignore the playback part): Playing back sound coming from microphone in real-time
Basically the idea is that you will have to take the value of each 16-bit sample (which corresponds to the value of the wave at that time). Using the sampling rate, you can calculate the time between peaks in volume. I think that might accomplish what you want.
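As a rough sketch of that approach (SAMPLE_RATE, THRESHOLD and the recording flag are my own placeholders; the peak detection here is deliberately crude):

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

final int SAMPLE_RATE = 44100;
final int THRESHOLD = 12000; // amplitude that counts as a "pop"; tune it
int bufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLE_RATE,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
short[] buffer = new short[bufferSize];
record.startRecording();

long samplesRead = 0;
while (recording) { // 'recording' is a flag you flip from the UI
    int n = record.read(buffer, 0, buffer.length);
    for (int i = 0; i < n; i++) {
        if (Math.abs(buffer[i]) > THRESHOLD) {
            // Time of this peak in seconds since recording started.
            double t = (samplesRead + i) / (double) SAMPLE_RATE;
            // log t, debounce nearby samples, etc.
        }
    }
    samplesRead += n;
}
record.stop();
record.release();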
This may be a bit overkill, but there is a framework from the MIT Media Lab called funf: http://code.google.com/p/funf-open-sensing-framework/
They have already created classes for audio input and some analysis (FFT and the like); saving to files and uploading are also implemented, as far as I've seen, and they handle most of the sensors available on the phone.
You can also get inspiration from the code they wrote, which I think is pretty good.
I have been scratching my head for the past week trying to achieve this effect on text: http://www.youtube.com/watch?v=gB2PL33DMFs&feature=related
It would be great if someone could give me some tips, guidance, or a tutorial on how to do this.
Thanks for reading and answering =D
If all you want is to display a movie with video and sound, a MediaPlayer can do that easily.
So I assume that you're actually talking about synchronizing some sort of animated display with a sound file being played separately. We did this using a MediaPlayer and polling getCurrentPosition from within an animation loop. This more or less works, but there are serious problems that need to be overcome. (All this deals with playing mp3 files; we didn't try any other audio formats).
First, your mp3 must be recorded at 44,100 Hz sampling rate. Otherwise the value returned by getCurrentPosition is way off. (We think it's scaled by the ratio of the actual sampling rate to 44,100, but we didn't verify this hypothesis.) A bit rate of 128,000 seems to work best.
Second, and more serious, is that the values returned by getCurrentPosition seem to drift away over time from the sound coming out of the device. After about 45 seconds, this starts to be quite noticeable. What's worse is that this drift is significantly different (but always present) in different OS levels, and perhaps from device to device. (We tested this in 2.1 and 2.2 on both emulators and real devices, and 3.0 on an emulator.) We suspected some sort of buffering problem, but couldn't really diagnose it. Our work-around was to break up longer mp3 files into short segments and chain their playback. Lots of bookkeeping aggravation. This is still under test, but so far it seems to have worked.
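A rough sketch of that kind of polling loop, not our exact code (the 20 ms interval, updateAnimation() and the raw resource are illustrative):

import android.media.MediaPlayer;
import android.os.Handler;

final MediaPlayer player = MediaPlayer.create(context, R.raw.song); // a 44,100 Hz mp3
final Handler handler = new Handler();
player.start();

handler.post(new Runnable() {
    @Override public void run() {
        int positionMs = player.getCurrentPosition(); // where playback claims to be
        updateAnimation(positionMs);                  // drive the visuals from this value
        if (player.isPlaying()) {
            handler.postDelayed(this, 20);            // poll again in ~20 ms
        }
    }
});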
Ted Hopp: time drift on MP3 files is likely caused by those MP3 files being VBR. I've been developing karaoke apps for a while, and pretty much every toolkit, from Qt Phonon to ffmpeg, has had problems reporting the correct audio position for variable bit rate MP3 files. I assume this is because they all try to calculate the current audio position from the number of decoded frames, which makes it unreliable for VBR MP3s. I described it in a user-friendly way in the Karaoke Lyrics Editor FAQ.
Unfortunately, the only solution I found was to re-encode the MP3s to CBR. Another was to ditch the reported position completely and rely only on the system clock. That actually produced a better result for VBR MP3s, but still not as good as re-encoding them to CBR.
I am aware that SoundPool is intended to handle small FX-like sounds, and I made sure the 4 sound clips I want to play one by one in sequence are small enough.
I used OGG quality 0 and the clips are 35 KB, 14 KB, 21 KB and 23 KB, totaling 92 KB of compressed audio. I have no idea how to estimate what the uncompressed size would be, but it should not be a lot, right?
So when I play the 4 sounds in sequence, it works well for the first 9 times (9 sequences x 4 sounds) but starts to cause memory issues on the ninth sequence for one of the sounds. It is always sequence 9 when I start to see the error.
What is the best way to handle that? I have a few ideas:
1) compress sounds even more (ogg quality -1 and mono instead of stereo)
2) constantly unload and reload sounds using SoundPool.load and SoundPool.unload (see the sketch after this list)
3) release and recreate the SoundPool instance from time to time
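Option 2 would look roughly like this (R.raw.clip1 is a placeholder, and the load-complete wiring is only hinted at in a comment):

import android.media.AudioManager;
import android.media.SoundPool;

SoundPool pool = new SoundPool(4, AudioManager.STREAM_MUSIC, 0);

// Keep only the clip that is about to play decoded in memory.
int id = pool.load(context, R.raw.clip1, 1);
// ...wait for loading to finish (setOnLoadCompleteListener), then:
pool.play(id, 1f, 1f, 1, 0, 1f);
// ...once the clip has finished playing:
pool.unload(id); // frees the decoded PCM until the clip is needed again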
Is there anything else I can do? It is embarrassing that the Android API cannot handle such small clips. I wonder how people create games with a lot of sound effects...
The errors look like this:
ERROR/AudioFlinger(35): not enough memory for AudioTrack size=1048640
DEBUG/MemoryDealer(35): AudioTrack (0x25018,size=1048576)
It seems I was able to resolve my issue after I:
1) downsampled my sound clips from 44 kHz to 22 kHz; the uncompressed size was cut in half (I figured out how to estimate the uncompressed sound size: export your clip to uncompressed WAV). I used the nice open source tool Audacity.
2) trimmed the sounds further to reduce their duration.
3) put a try/catch around play() just in case (it does catch errors when it tries to play the sound but cannot).
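Point 3 is just a defensive wrapper around the existing call, roughly like this (pool and soundId stand for the already-loaded SoundPool state):

try {
    pool.play(soundId, 1f, 1f, 1, 0, 1f);
} catch (RuntimeException e) {
    // Better to silently drop one effect than to crash the game.
}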
That seems odd. SoundPool should be expanding your audio clips into memory when it loads them, and once they're loaded I wouldn't expect you to run into memory issues later. When I've run into memory issues, it was right at the beginning, not later on. Are you sure you're not loading more sounds later on?
To answer your other question, some games use JET Player and MIDI sounds instead of SoundPool. The JetBoy sample program that comes in the Android SDK is a great example of how to do this type of sound work in an app.