I recorded some audio files to use in my app, around 50, so I would prefer not to record them all again. I recently used SoundPool to play the audio files on a real device instead of the emulator, and you can barely hear them. On the emulator, with my PC volume and the device volume set to max, I can hear them fine. Should I re-record the files at a louder level, or is there another option?
I've found that when targeting mobile devices (and cheap/small laptop speakers for that matter), it is best to do two things to your audio:
Compression: I do not mean data compression, I mean dynamic range compression. This reduces the level differences between the loud and soft parts of the recording, allowing all of it to be heard better.
Normalization: When you normalize audio, you find the loudest part of the clip and scale the entire clip up so that the loudest part sits at the maximum level the audio file can store.
You can do both of these easily with any audio editing software, such as Audacity.
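If you'd rather do the normalization step in code, a minimal sketch over 16-bit PCM samples might look like this (the method and array names are my own, purely illustrative):

// Sketch: scale 16-bit PCM so the loudest sample reaches full scale.
static void normalize(short[] samples) {
    int peak = 0;
    for (short s : samples) {
        peak = Math.max(peak, Math.abs((int) s)); // cast avoids overflow at Short.MIN_VALUE
    }
    if (peak == 0) return; // pure silence, nothing to scale
    double gain = (double) Short.MAX_VALUE / peak;
    for (int i = 0; i < samples.length; i++) {
        samples[i] = (short) Math.round(samples[i] * gain);
    }
}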
Finally, you should also keep in mind the range of frequencies that such small speakers can actually reproduce.
Most of these speakers are built with speech in mind. Because of this, you will find that they tend to be the loudest in the 700Hz-2.5kHz range.
That is, if your sound effects are low in frequency (think bass), it will be almost impossible to hear them on a phone's small speaker, which cannot reproduce such low frequencies.
If you have more questions on the matter, please visit https://video.stackexchange.com/.
If it is the volume of the recorded files, you can change it using a normalizer like MP3Gain.
I have a cross-platform (iOS and Android) app where I record audio clips and then send them to the server for some machine learning operations. In the iOS app I use AVAudioRecorder for recording the audio, and in the Android app I use MediaRecorder. On the mobile side I initially use the m4a format because of size constraints. After the audio reaches the server, I convert it to wav format before using it in the ML operations.
My problem is that on iOS, AVAudioRecorder by default applies some amplification to the raw audio data before we developers get access to it, whereas on Android, MediaRecorder applies no such amplification to the raw data. In other words, on iOS I will never get the raw audio stream from the microphone, whereas on Android I will only ever get the raw audio stream from the microphone. The distinction is clearly visible if you record the same audio source side by side on an iPhone and an Android phone, then import both recordings into Audacity for a visual comparison. I have attached a sample representation screenshot below.
In the image, the first track is the Android recording and the second track is the iOS recording. When I listen to both through headphones I can only vaguely distinguish them, but when I visualize the data points, the difference is clearly visible in the image. These distinctions are bad for ML operations.
Clearly the iPhone applies a certain amplification factor, which I would like to implement on Android as well.
Is anyone aware of the amplification factor? Or are there any other possible alternatives?
It's quite possible that the difference is the effect of Automatic Gain Control (AGC).
You can disable this in your app's AVAudioSession by setting its mode to AVAudioSessionModeMeasurement, which you do once in your application, usually at startup. This disables a great deal of input signal processing.
That said, reading your problem description, you might be better off enabling AGC on Android instead; see the sketch after the code below.
If neither of these yields results, you might want to gain scale both signals so they are just below clipping.
let audioSession = AVAudioSession.sharedInstance()
try? audioSession.setMode(AVAudioSessionModeMeasurement) // disables AGC and other input processing
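On the Android side, enabling the platform AGC effect could look like this sketch, assuming you record with AudioRecord (MediaRecorder does not expose an audio session id) and that the device actually implements the effect:

import android.media.audiofx.AutomaticGainControl;

// Sketch: attach the platform AGC effect (API 16+) to an existing,
// already-configured AudioRecord instance.
if (AutomaticGainControl.isAvailable()) {
    AutomaticGainControl agc = AutomaticGainControl.create(audioRecord.getAudioSessionId());
    if (agc != null) {
        agc.setEnabled(true);
    }
}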
The audio recorded by MediaRecorder in my Android app has a lot of noise. How can I use noise suppression to remove the noise while recording?
I had the same problem of low audio quality while using MediaRecorder and finally figured out a correct, working solution. Here are a few modifications you need to make to get good quality audio recordings:
Save the file using the .m4a extension, and configure the recorder as follows:
mRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
mRecorder.setAudioEncodingBitRate(16*44100);
mRecorder.setAudioSamplingRate(44100);
Many solutions on Stack Overflow suggest .setAudioEncodingBitRate(16), but 16 bps is so low as to be meaningless.
Source: Grant's answer on Stack Overflow to "very poor quality of audio recorded on my droidx using MediaRecorder, why?"
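For context, the full recording lifecycle around those settings might look like this (a sketch only; the context and output path are illustrative, and error handling is omitted):

MediaRecorder mRecorder = new MediaRecorder();
mRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
mRecorder.setAudioEncodingBitRate(16 * 44100);
mRecorder.setAudioSamplingRate(44100);
mRecorder.setOutputFile(context.getFilesDir() + "/recording.m4a"); // note the .m4a extension
mRecorder.prepare(); // throws IOException
mRecorder.start();
// ... record ...
mRecorder.stop();
mRecorder.release();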
Are you testing this with the emulator, or on an actual device (if so, which device)? The acoustic tuning (which includes gain control, noise reduction, etc) will be specific to a given platform and product, and is not something you can change.
Jellybean includes APIs to let applications apply certain acoustic filters on recordings, and a noise suppressor is one of those. However, by using that API you're limiting your app to only function correctly on devices running Jellybean or later (and not even all of those devices might actually implement this functionality).
Another possibility would be to include a noise suppressor in your app. I think Speex, for example, includes noise suppression functionality, but it's geared towards low-bitrate speech encoding.
https://developer.android.com/reference/android/media/audiofx/NoiseSuppressor.html
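A minimal sketch of using that NoiseSuppressor API, assuming you record with AudioRecord (so an audio session id is available) and that the device actually implements the effect:

import android.media.audiofx.NoiseSuppressor;

// Sketch: attach the platform noise suppressor (API 16+) to an existing,
// already-configured AudioRecord instance.
if (NoiseSuppressor.isAvailable()) {
    NoiseSuppressor ns = NoiseSuppressor.create(audioRecord.getAudioSessionId());
    if (ns != null) {
        ns.setEnabled(true);
    }
}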
I'm using EZAudio's FFT to analyze audio as the iPhone "hears" it. I am listening for high-pitched sounds (17 kHz+) embedded into music. When the iPhone hears the sounds with no music, it records the data perfectly and hears the pitch fine. However, when music is playing, the sounds are no longer heard, or only about 1 in 8 is heard. Again, I am using EZAudio to analyze the sound. I have an Android phone with a similar app on it (it displays a graph of Hz for incoming audio waves), and the Android phone can hear these sounds.
Why would the Android phone hear these high-pitched sounds but not the iPhone? Is it because of a flaw in EZAudio or is it due to a higher quality microphone?
The most likely answer is Automatic Gain Control (AGC). This is enabled by default on the microphone, and is useful for telephony or voice recording.
At 17 kHz you're probably already at a frequency at which the microphone is not particularly sensitive; however, in the absence of audio at other frequencies, the AGC will have increased the gain of the microphone. As soon as other frequencies are present, the gain reduces again, and the 17 kHz signal is lost in the noise.
Looking at the EZAudioFFT source code, it doesn't appear to set up the AVAudioSession to use measurement mode (which disables AGC and the HPF on the microphone).
You can achieve this with:
NSError *pError = nil;
[[AVAudioSession sharedInstance] setMode:AVAudioSessionModeMeasurement error:&pError];
So I have an app where an mp3 file is played using MediaPlayer. On most devices everything is fine, but on Samsung and some other devices (like the HTC One S) the same mp3 plays "too fast", skipping gaps: it looks like the player does not handle sound gaps (silence) correctly. These mp3s are just speech, and speech naturally has gaps (silence) between spoken words. These gaps are not played correctly in terms of time; MediaPlayer just skips them. As a result, the mp3 is played faster by the total duration of the gaps it contains.
What could be a reason and solution for this?

UPDATE: I found that it's related to the sample rate plus VBR. If the mp3 is 22050, 24000, or 32000 Hz instead of 44100 or 48000, and VBR or ABR is used, the issue shows up. I'm using LAME for mp3 encoding. If I remove the "--resample 22.05" option so the resulting mp3 becomes 44.1 kHz, there is no issue playing this mp3 on the Samsung phone. However, the resulting mp3 becomes twice as big, which is not acceptable for me because my apk would then exceed 50 MB. So now the question is how to properly compress mp3 as 22 kHz/VBR/mono.
The issue was fixed in the following way: I added white noise to the original sound and then encoded it to MP3 format. The resulting files became bigger in size, but they also became more compatible (with Samsung devices). The original audio file (made at a recording studio) is too clean, meaning that the silence/pauses in the speech (between pronounced words) have no waveform at all if you look in a sound editor; it is ideal silence. On various Samsung devices such MP3-encoded files played with the described issue, yet on most other devices and PCs the same files played just fine. Once again - Samsung "rules"!
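For illustration, mixing low-level white noise into 16-bit PCM samples before MP3 encoding could look roughly like the following sketch (not necessarily the exact method used; the amplitude value is an arbitrary small number out of 32767):

import java.util.Random;

// Sketch: add barely audible white noise to 16-bit PCM samples so that
// "ideal silence" gets a real waveform before MP3 encoding.
static void addWhiteNoise(short[] samples, int amplitude) {
    Random rng = new Random();
    for (int i = 0; i < samples.length; i++) {
        int noise = rng.nextInt(2 * amplitude + 1) - amplitude; // in [-amplitude, amplitude]
        int mixed = samples[i] + noise;
        samples[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, mixed)); // clamp
    }
}

Something like addWhiteNoise(samples, 20) is enough to give the silent passages a non-zero waveform without being audible over speech.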
You may want to Google around for controlling playback speed in your application; I mean to say that there should be some sort of 'playback rate' variable, a floating-point value somewhere between 0 and 1. This might help you with a workaround for your app; I hope you find it helpful in some way. Oh, and by the way, here are some useful links that might help you out as well, and if not, then we have to keep waiting in the queue for Samsung ;-) if it's specifically related to them. Happy coding!
http://code.google.com/p/android/issues/detail?id=1961
play an mp3 with MediaPlayer class on Android issues
Regards
Anas.
I have an application that plays back AMR audio files that it has downloaded and cached locally.
This works fine — the basic MediaPlayer does its job.
However, the audio volume is generally very low, and manually increasing the volume with the hardware keys still doesn't make the playback quite loud enough.
The behaviour seems to vary across devices — Sony Ericssons are particularly low, HTC devices are reasonable, and the Samsung Galaxy S is actually very loud when the volume is turned up to the maximum.
Are there any relatively simple approaches, using the Android SDK, that could, say, double the volume while playing back from the AMR file?
I note that AudioTrack allows you to manipulate audio, but this seems to be for raw PCM streams.
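If I went the AudioTrack route, I assume it would mean decoding the AMR to PCM myself and scaling the samples before playback, something like this sketch (the gain value, buffer handling, and method name are just illustrative):

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Sketch: scale decoded 16-bit mono PCM by a gain factor (e.g. 2.0f to
// roughly double the volume, clipping where necessary) and play it back.
void playAmplified(short[] pcm, int sampleRate, float gain) {
    for (int i = 0; i < pcm.length; i++) {
        int scaled = (int) (pcm[i] * gain);
        pcm[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, scaled));
    }
    AudioTrack track = new AudioTrack(
            AudioManager.STREAM_MUSIC,
            sampleRate,
            AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT,
            pcm.length * 2, // buffer size in bytes for MODE_STATIC
            AudioTrack.MODE_STATIC);
    track.write(pcm, 0, pcm.length);
    track.play();
}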