Amplifying playback volume from a local AMR file - android

I have an application that plays back AMR audio files that it has downloaded and cached locally.
This works fine — the basic MediaPlayer does its job.
However, the audio volume is generally very low, and manually increasing the volume with the hardware keys still doesn't make the playback quite loud enough.
The behaviour seems to vary across devices — Sony Ericssons are particularly low, HTC devices are reasonable, and the Samsung Galaxy S is actually very loud when the volume is turned up to the maximum.
Are there any relatively simple approaches, using the Android SDK, that could, say, double the volume while playing back from the AMR file?
I note that AudioTrack allows you to manipulate audio, but this seems to be for raw PCM streams.
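For reference, this is roughly the current setup; a minimal sketch, assuming it runs inside an Activity (so getCacheDir() is available) and using a hypothetical cached file name. Note that MediaPlayer.setVolume() only scales between 0.0 and 1.0, so it cannot boost the signal above the current STREAM_MUSIC volume:

// Minimal sketch of the existing playback path; the cached file name is hypothetical.
// setVolume() only attenuates relative to STREAM_MUSIC, so 1.0f/1.0f is already the ceiling.
MediaPlayer player = new MediaPlayer();
player.setAudioStreamType(AudioManager.STREAM_MUSIC);
try {
    player.setDataSource(getCacheDir().getAbsolutePath() + "/clip.amr");
    player.prepare();
} catch (IOException e) {
    e.printStackTrace();
}
player.setVolume(1.0f, 1.0f); // the maximum MediaPlayer accepts
player.start();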

Related

Android: play multiple mp3s simultaneously, with precise sync and independent volume control

I want to create an Android app that plays multiple mp3s simultaneously, with precise sync (less than 1/10 of a second off) and independent volume control. The size of each mp3 could be over 1 MB, with a run time of up to several minutes. My understanding is that MediaPlayer will not do the precise sync, and SoundPool can't handle files over 1 MB or 5 seconds of run time. I am experimenting with Superpowered and may end up using that, but I'm wondering if there's anything simpler, given that I don't need any processing (reverb, flange, etc.), which is Superpowered's focus.
Also ran across the YouTube video on Android high-performance audio, from Google I/O 2016. Wondering if anyone has any experience with this.
https://www.youtube.com/watch?v=F2ZDp-eNrh4
Superpowered was originally made for my DJ app (DJ Player in the App Store), where precisely syncing multiple tracks is a requirement.
Therefore, syncing multiple mp3s and independent volume control is definitely possible and core to Superpowered. All you need is the SuperpoweredAdvancedAudioPlayer class for this.
The CrossExample project in the SDK has two players playing in sync.
The built-in audio features in Android are highly device and/or build dependent. You can't get a consistent feature set with those. In general, the audio features of Android are not stable. That's why you need a specialized audio library that does everything "inside" your application (so it is not a "wrapper" around Android's audio features).
When you play compressed files (AAC, MP3, etc.) on Android, in most situations they are decoded in hardware to save power, except when the output goes to a USB audio interface. The hardware codec accepts data in big chunks (again, to save power). Since it's not possible to issue a command to start playing multiple streams at once, what will often happen is that one stream has already sent a chunk of compressed audio to the hardware codec and started playing, while the others haven't yet sent their data.
You really need to decode these files in your app and mix the output to produce a single audio stream. Then you will guarantee the desired synchronization. The built-in mixing facilities are mostly intended to allow multiple apps to use the same sound output; they are not designed for multitrack mixing.
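To make the decode-and-mix idea concrete, here is a minimal sketch of mixing two already-decoded 16-bit PCM buffers with independent gains (conceptual only, not the Superpowered API; the decoded buffers are assumed to share the same sample rate and channel layout):

// Mix two decoded 16-bit PCM tracks into one buffer with independent gains.
// Conceptual sketch only, not Superpowered API.
short[] mix(short[] trackA, short[] trackB, float gainA, float gainB) {
    int length = Math.min(trackA.length, trackB.length);
    short[] out = new short[length];
    for (int i = 0; i < length; i++) {
        int sample = (int) (trackA[i] * gainA + trackB[i] * gainB);
        // Clamp to the 16-bit range to avoid wrap-around distortion.
        if (sample > Short.MAX_VALUE) sample = Short.MAX_VALUE;
        if (sample < Short.MIN_VALUE) sample = Short.MIN_VALUE;
        out[i] = (short) sample;
    }
    return out;
}

The mixed buffer is then written to a single output (for example one AudioTrack), so both tracks start together and stay in lockstep.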

Hardware limitations with iPhone Microphone

I'm using EZAudio FFT to analyze audio as the iPhone "hears" it. I am listening for high-pitched sounds embedded into music (17 kHz+). When the iPhone hears the sounds with no music, it records the data perfectly and hears the pitch fine. However, when music is playing, the sounds are no longer heard, or only about 1 in 8 is heard. Again, I am using EZAudio to analyze the sound. I have an Android phone with a similar app on it (it displays a graph of Hz for incoming audio waves), and the Android phone can hear these sounds.
Why would the Android phone hear these high-pitched sounds but not the iPhone? Is it because of a flaw in EZAudio or is it due to a higher quality microphone?
The most likely answer is Automatic Gain Control (AGC). This is enabled by default on the microphone, and is useful for telephony or voice recording.
At 17 kHz you're probably already at a frequency at which the microphone is not particularly sensitive; however, in the absence of audio at other frequencies, the AGC will have increased the gain of the microphone. As soon as other frequencies are present, the gain reduces again, and the 17 kHz signal is down in the noise.
Looking at the EZAudioFFT source code, it doesn't appear to be setting up the AVAudioSession to use measurement mode (which disables AGC, and the HPF on the microphone).
You can achieve this with:
NSError *pError = nil;
// setMode:error: is the AVAudioSession API; check pError / the return value in real code.
[[AVAudioSession sharedInstance] setMode:AVAudioSessionModeMeasurement error:&pError];

MediaPlayer issue - plays MP3s too fast (skips encoded silence)

So I have an app where an mp3 file is played using MediaPlayer. On most devices everything is fine, but on Samsung and some other devices (like the HTC One S) the same mp3 plays "too fast" (skipping gaps): it looks like the player does not handle gaps (silence) correctly. These mp3s are just speech, and speech naturally has gaps (silence) between spoken words. These gaps are not played correctly in terms of time; MediaPlayer just skips them. As a result, the mp3 plays faster, shortened by the duration of all the gaps it contains.
What could be a reason and solution for this?
UPDATE: I found that it's about the sample rate + VBR. Somehow, if the mp3 is 22050/24000/32000 Hz instead of 44100 or 48000, and VBR or ABR is used, the issue shows up. I'm using LAME for mp3 encoding. If I remove the "--resample 22.05" option so the resulting mp3 becomes 44.1 kHz, there is no issue playing this mp3 on a Samsung phone. However, the resulting mp3 becomes twice as big, which is not acceptable for me because my APK would then exceed 50 MB. So now the question is how to properly compress an mp3 as 22 kHz/VBR/mono.
The issue was fixed in the following way: I added white noise to the original sound and then encoded it to MP3. The resulting files became bigger, but they also became more compatible (with Samsung devices). The original audio file (made in a recording studio) is too clean, meaning that the silence/pauses in speech (between spoken words) have no waveform at all if you look in a sound editor; it is ideal silence. On various Samsung devices, such MP3-encoded files played with the issue described above. However, on most other devices and on PCs, the same MP3 files played just fine. Once again, Samsung "rules"!
You could look into controlling the playback speed in your application; I mean to say that there is usually some sort of 'playback rate' variable, a floating point value where 1.0 is normal speed (see the sketch at the end of this answer). This might help you find a workaround for your app; I hope it is useful in some way. By the way, here are some useful links that might help you out as well, and if not, then we have to keep waiting in the queue for Samsung ;-) if it's specifically related to them. Happy coding.
http://code.google.com/p/android/issues/detail?id=1961
play an mp3 with MediaPlayer class on Android issues
Regards
Anas.
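As a side note on the 'playback rate' idea above: newer Android versions (API 23+) do expose a playback rate on MediaPlayer via PlaybackParams, although that won't exist on the older Samsung/HTC devices in question. A minimal sketch, assuming an already-prepared MediaPlayer named player (hypothetical name):

// Force the normal playback rate via PlaybackParams (API 23+ only).
// 'player' is an already-prepared MediaPlayer; the name is illustrative.
PlaybackParams params = player.getPlaybackParams();
params.setSpeed(1.0f); // 1.0f = normal speed
player.setPlaybackParams(params); // on a prepared player, a non-zero speed also starts playback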

Android MediaPlayer Stream buffering

I'm making an android app that can play audio streams from URLs.
Activity contains a simple Play/Pause button and a Seekbar.
I've managed to play the stream by running the media player as a service but now I want to achieve buffering.
Buffering in the sense that, when the mediaplayer starts, I want the stream to be saved in a buffer or a temporary location.
Also, I want to show this buffered amount in the seekbar (probably as a secondary progress), and when the user drags the seekbar to any buffered point, the player should play that stream without any disruption.
So, I'm seeking a decent tutorial for this. I tried searching but was not able to find a simple solution for version >= 2.3.
(But if it works for 2.2 as well, that would be great too.)
Thank You
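For the secondary-progress part of the question, the usual wiring is MediaPlayer's buffering callback feeding SeekBar.setSecondaryProgress(). A minimal sketch, assuming a MediaPlayer named player and a SeekBar named seekBar (hypothetical names):

// Report the buffered percentage as the SeekBar's secondary progress.
// 'player' and 'seekBar' are assumed to exist already; the names are illustrative.
seekBar.setMax(100);
player.setOnBufferingUpdateListener(new MediaPlayer.OnBufferingUpdateListener() {
    @Override
    public void onBufferingUpdate(MediaPlayer mp, int percent) {
        seekBar.setSecondaryProgress(percent); // buffered amount, 0-100
    }
});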
I used the internet radio streaming posted in an answer here: Online radio streaming app for Android
And then I used some of this guy's work for implementing a visualizer: https://github.com/felixpalmer/android-visualizer
I'm having trouble on the Samsung S3 though. Have you had any issues with streaming internet radio? It works fine on Samsung Music, Note, Note II, S II, S III mini, (even) S4 and Google Nexus phones, but supposedly because of some firmware buffer setting on the S3, it just doesn't stream music on an S3.
I've gotten results like streaming after 2-5 minutes, but who wants to wait 5 minutes for a radio stream to start - no one!
I've discovered that it sometimes depends on the bitrate of the stream. For example, a 96 kb/s stream started playing instantly on the S3, while a 128 kb/s stream took 2+ minutes. That actually doesn't make sense, since one would think that the higher-bitrate stream would fill the buffer more quickly than the lower-bitrate one.
Do you have any advice or suggestions for me? I have questions here and here.

Audio Sound Too Low in Android App

I recorded some audio files to use in my app, around 50, so I would like to not record all of them again. I recently used SoundPool to play the audio files on a real device instead of the emulator and you can barely hear them. On the emulator with my PC volume set to max and device to max, I can hear it fine. Should I try to record the files louder or is there another option?
I've found that when targeting mobile devices (and cheap/small laptop speakers for that matter), it is best to do two things to your audio:
Compression: I do not mean data compression, I mean dynamic range compression. This will reduce some of the level differences between the loud and soft parts of the recording, allowing all of it to be heard better.
Normalization: When you normalize audio, you find the loudest part of the audio and scale the entire clip up so that the loudest part is at the maximum level that can be stored in the audio file (a rough sketch of the idea follows below).
You can do both of these easily with any audio editing software, such as Audacity.
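To make the normalization step concrete, here is a minimal sketch of peak normalization on a 16-bit PCM buffer (illustrative only; in practice an editor such as Audacity does this for you):

// Peak-normalize a 16-bit PCM buffer: find the loudest sample, then scale the
// whole clip so that sample sits at full scale. Illustrative sketch only.
short[] normalize(short[] samples) {
    int peak = 0;
    for (short s : samples) {
        peak = Math.max(peak, Math.abs(s));
    }
    if (peak == 0) return samples; // all silence, nothing to scale
    float gain = (float) Short.MAX_VALUE / peak;
    short[] out = new short[samples.length];
    for (int i = 0; i < samples.length; i++) {
        int scaled = Math.round(samples[i] * gain);
        out[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, scaled));
    }
    return out;
}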
Finally, you should also keep in mind the frequencies that such small speakers can actually reproduce.
Most of these speakers are built with speech in mind. Because of this, you will find that they tend to be the loudest in the 700Hz-2.5kHz range.
That is, if your sound effects are low in frequency (think bass), then it will be almost impossible to hear them on a phone's small speaker which cannot reproduce such low frequencies.
If you have more questions on the matter, please visit https://video.stackexchange.com/.
If it is the volume of the recorded files, you can change it using a normalizer like MP3Gain.
