I have developed an app that records sound and plays it back at the same time with roughly 100 ms of delay, using the AudioTrack and AudioRecord classes in Android. But the played audio has a lot of background noise. Is there any way in Android to reduce the background noise during playback?
Please don't tell me to use the Audacity software to reduce the noise, because I am not saving the recorded audio; I just keep it in a buffer and play that buffer with AudioTrack. Can a filter to reduce the background noise be implemented in Android using the NDK?
The simple thing to do is play the audio back using the voice-call setting, which will filter out most of the "hum" noise: AudioManager.STREAM_VOICE_CALL. You can also use the NoiseSuppressor effect if it is available: NoiseSuppressor.create(device.getAudioSessionId());
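A minimal sketch of that combination, assuming an AudioRecord/AudioTrack loopback like yours (variable names and the 8 kHz rate are placeholders; NoiseSuppressor needs API 16+ and is not present on every device):

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioRecord;
    import android.media.AudioTrack;
    import android.media.MediaRecorder;
    import android.media.audiofx.NoiseSuppressor;

    // Inside your recording thread:
    int sampleRate = 8000;  // assumption: pick a rate your device supports
    int minBuf = AudioRecord.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
            sampleRate, AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT, minBuf);

    // Attach the platform noise suppressor to the record session, if the device has one.
    if (NoiseSuppressor.isAvailable()) {
        NoiseSuppressor ns = NoiseSuppressor.create(recorder.getAudioSessionId());
        if (ns != null) ns.setEnabled(true);
    }

    // Play back on the voice-call stream, as suggested above.
    AudioTrack player = new AudioTrack(AudioManager.STREAM_VOICE_CALL,
            sampleRate, AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT, minBuf, AudioTrack.MODE_STREAM);

    recorder.startRecording();
    player.play();
    short[] buffer = new short[minBuf / 2];
    while (!Thread.interrupted()) {            // stop condition is up to you
        int read = recorder.read(buffer, 0, buffer.length);
        if (read > 0) player.write(buffer, 0, read);
    }
    recorder.stop();
    player.stop();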
You can use the ClearRecord application. It is free and can play audio with noise reduction enabled: while playing, the background noise is eliminated and you hear a clear sound.
I made an Android app to record and play wildlife sounds. It also shows a waveform of the volume of the recording as it records, and when you load a song it also shows you the waveform. I did this by reading the byte[] of the WAVE file, translating it to a short[], and drawing the graph. It works as expected.
The problem with wildlife sounds is that they are usually very quiet. Of course, the device volume at play time is at its max, but even at max the volume is usually still too low. So I implemented a gain control at recording time: it just multiplies the short[] samples from AudioRecord (taking care of clipping) while drawing the waveform, and then saves them to the WAVE file. Again, it works as expected.
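For reference, this is roughly what I do (simplified; the gain value and the method name are just illustrative):

    // Multiply the recorded samples by a gain factor, clamping to the 16-bit range
    // so the result does not wrap around (hard clipping instead of overflow).
    static void applyGain(short[] samples, int count, float gain) {
        for (int i = 0; i < count; i++) {
            int amplified = (int) (samples[i] * gain);
            if (amplified > Short.MAX_VALUE) amplified = Short.MAX_VALUE;
            if (amplified < Short.MIN_VALUE) amplified = Short.MIN_VALUE;
            samples[i] = (short) amplified;
        }
    }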
But now I am thinking that I am doing everything wrong: why apply gain before saving? This may introduce clipping and ruin the capture. It would be much better to apply gain at play time.
But I do not know how to do this. I play the sound with MediaPlayer, and although it has a setVolume(float, float) method, this does not actually increase the volume: its values range from 0.0 to 1.0, so it can only attenuate, not amplify.
How can I send a stream with gain to MediaPlayer?
If this cannot be done, how can I add volume gain at play time? Maybe with AudioTrack? I have never used it...
How can I play background audio, in Android, without interrupting the MediaPlayer playback, by either using MediaPlayer (preferred) or OpenSL ES?
I know SoundPool is able to play sound effects without interrupting any MediaPlayer playback, but the size is limited to 1M per effect, which is far too little. Not requesting audio focus via AudioManager doesn't seem to work either; in that case the audio doesn't play at all.
And in the case of OpenSL ES, all audio generally stops when I start to play a longer asset file. It's similar to the behaviour of SoundPool described above.
Edit from the comments:
I don't want to interrupt other music players; it's the background audio of a game, which should play without interrupting, for example, a player's own music. Games like Subway Surfer, Clash Royale and such seem to have achieved this somehow, but I could not achieve it via OpenSL ES or MediaPlayer.
In order to play sound in the background you can use SoundPool, AudioTrack, or OpenSL ES.
SoundPool: Use small files and make a sequence. In my last project I used 148 sound files (all small) in different scenarios and played them with SoundPool. Make a list and play them one by one or in parallel. Also, in games you usually have a small set of sounds for a particular scenario that loops; SoundPool is best for that. You can also apply effects like rate changes. OGG files are much smaller, so use them.
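A minimal SoundPool sketch (the resource names, stream count, and `context` reference are placeholders), loading small OGG effects and looping one as background audio:

    import android.media.AudioAttributes;
    import android.media.SoundPool;

    AudioAttributes attrs = new AudioAttributes.Builder()
            .setUsage(AudioAttributes.USAGE_GAME)
            .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
            .build();
    SoundPool pool = new SoundPool.Builder()
            .setMaxStreams(4)                       // how many effects may overlap
            .setAudioAttributes(attrs)
            .build();

    // R.raw.loop_ambient and R.raw.coin are placeholder resources in res/raw.
    final int ambientId = pool.load(context, R.raw.loop_ambient, 1);
    final int coinId = pool.load(context, R.raw.coin, 1);

    pool.setOnLoadCompleteListener((sp, sampleId, status) -> {
        if (status == 0 && sampleId == ambientId) {
            // loop = -1 repeats the sample forever; rate can be varied 0.5f..2.0f.
            sp.play(sampleId, 1f, 1f, 1, -1, 1f);
        }
    });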
AudioTrack: You can feed raw PCM data to AudioTrack. If you want, you can extract PCM data with MediaExtractor from almost any format and use that. It will be a bit more work in your case, but it is really good (it supports large data and even live streaming).
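A sketch of the AudioTrack half, assuming you already hold decoded 16-bit PCM (for example from MediaExtractor/MediaCodec); `pcmChunks` is a placeholder for wherever your decoded buffers come from:

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioTrack;

    int sampleRate = 44100;                       // must match the decoded audio
    int minBuf = AudioTrack.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
            minBuf, AudioTrack.MODE_STREAM);
    track.play();
    for (byte[] chunk : pcmChunks) {              // placeholder: your decoded PCM buffers
        track.write(chunk, 0, chunk.length);      // blocks until the chunk is queued
    }
    track.stop();
    track.release();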
OpenSL ES: Android apparently uses OpenSL ES under the hood for most of its audio, so using it will help you a lot. But it's not easy to get everything done with it; you need to learn a lot more to achieve less.
I have been working intensively with OpenSL ES for about 20 days, and I would still say: for small purposes use SoundPool, for medium to high-level implementations use AudioTrack, and for a kick-ass implementation use OpenSL ES.
PS: It will be a bad experience for your users if you play game sound in the background while they are playing their own music or are on a call. Personal experience.
I am working on an Android project in which I want the same functionality as the Android native audio recorder, specifically the sound-quality option. I can't use the native recorder, so I'm recording audio using the AudioRecord class. How do I process the data coming from AudioRecord to implement the quality levels? The application has three quality levels defined that I need to implement:
Low - recordings will contain only high-pitched sounds,
Medium - recordings will contain some of the background sound,
High - everything that reaches the microphone will be recorded.
Please suggest some way to do it.
We are developing a VOIP application. One component needs to record the audio from the mic and play the remote audio to the speaker, and we need to do some audio/signal processing on the recorded audio.
But on some Android devices the selected mic and speaker are so close together that the audio captured from the mic clips (it is too loud) because of the audio played by the speaker. This causes nonlinear distortion in the captured waveform and breaks the audio/signal-processing component.
We don't want to set AUDIO_STREAM_VOICE_CALL to enable the built-in AEC, because it forces the recording sample rate down to 8 kHz, while we'd like to record at 48 kHz.
So we have considered the following solutions:
Decrease the mic volume. Based on this SO question and this discussion thread, that seems impossible.
Use a specific speaker and mic so the distance between them is a bit larger and the volume of the audio captured by the mic is lower.
So is there any way to select a specific speaker on the Android platform?
If the distance between the microphone and the speaker is crucial here, maybe it would be enough to use the camera's mic:
MediaRecorder.AudioSource.CAMCORDER
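A sketch of capturing from that source at 48 kHz (whether CAMCORDER actually maps to a different physical mic depends on the device):

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;

    int sampleRate = 48000;
    int minBuf = AudioRecord.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.CAMCORDER,
            sampleRate, AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT, minBuf);
    recorder.startRecording();
    // ... read() into your processing pipeline as usual ...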
I want to make a music app (Android).
The user records their voice and the app changes it into a piano or guitar sound,
so I made the recording part and I analyze the recorded voice using an FFT.
The problem is how to play the instrument sound.
If I use a Thread, I can't play the changed beat; it just plays the sound at regular intervals.
I use sound files (.mid, .wav, etc.) in the raw folder to play the instrument sound.
Please help me: how do I play the instrument sound?
One common way of doing this is audio analysis and resynthesis. For analysis you would use a pitch-estimation algorithm (not just an FFT). Then you feed the output of the analysis (estimated pitch, bandwidth, amplitude, etc.) to a real-time instrument-waveform synthesis module that fills the audio output, usually with short buffers (some number of milliseconds each) on a periodic callback. There are many synthesis algorithms of varying quality.
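As a very rough illustration (a sine tone standing in for a real instrument model, not a full synthesizer), here is a loop that writes short blocks to an AudioTrack at whatever pitch and amplitude your analysis reports; `estimatePitchHz()`, `estimateAmplitude()` and `running` are placeholders for your own analysis stage and stop flag:

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioTrack;

    int sampleRate = 44100;
    int bufSamples = 1024;                        // roughly 23 ms per block at 44.1 kHz
    int minBuf = AudioTrack.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioTrack out = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
            Math.max(minBuf, bufSamples * 2), AudioTrack.MODE_STREAM);
    out.play();

    short[] block = new short[bufSamples];
    double phase = 0;
    while (running) {                             // your stop flag
        double freq = estimatePitchHz();          // from your FFT / pitch tracker
        double amp  = estimateAmplitude() * Short.MAX_VALUE;
        for (int i = 0; i < block.length; i++) {
            block[i] = (short) (amp * Math.sin(phase));
            phase += 2 * Math.PI * freq / sampleRate;
        }
        out.write(block, 0, block.length);        // blocking write paces the loop
    }
    out.stop();
    out.release();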
This technique appears to be used by several iOS/iPhone apps. I'm not sure about the latest Android APIs, but with earlier Android versions the minimum latency allowed by the OS API was reported to be long and not very good.