Re-sampling audio alters the speed - android

I need to mix audio files with different sample rates. I'm trying to convert all of them to a common sample rate, say 44100 Hz. But when I re-sample the audio files, the speed of the original audio changes (it becomes slower). Is there a way to re-sample the audio without altering the speed? I'm using the following class to re-sample the audio file.
http://code.google.com/p/musicg/source/browse/src/com/musicg/dsp/Resampler.java

When you play the resampled audio, you must play it at the new sample rate. If the result sounds slow, you are most likely playing audio that was resampled up to 44100 Hz at its original, lower rate.
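For example, with AudioTrack the playback rate is a constructor argument and must match the rate the data was resampled to; a minimal sketch for 16-bit mono PCM (the buffer values are illustrative):

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class ResampledPlayback {
    // Play 16-bit mono PCM that has been resampled to 44100 Hz. Passing
    // the old rate here is exactly what makes the audio sound slow or fast.
    static void play(short[] resampledPcm) {
        int newRate = 44100; // must match the resampler's output rate

        int minBuf = AudioTrack.getMinBufferSize(newRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC,
                newRate, AudioFormat.CHANNEL_OUT_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
                Math.max(minBuf, resampledPcm.length * 2),
                AudioTrack.MODE_STREAM);

        track.play();
        track.write(resampledPcm, 0, resampledPcm.length);
        track.stop();
        track.release();
    }
}
```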

Related

How to record microphone to more compressed format during WebRTC call on Android?

I have an app that makes calls using WebRTC, and during a call I need to record the microphone. WebRTC has a WebRTCAudioRecord object to record audio, but the resulting file is very large (16-bit PCM). I want to record to a smaller size.
I've tried MediaRecorder, but it doesn't work: WebRTC is already recording, and MediaRecorder does not have permission to record while calling.
Has anyone done this, or have any ideas that could help?
WebRTC is considered a comparatively strong pre-processing tool for audio and video. Its native development includes fully optimized C and C++ classes that maintain speech quality and intelligibility.
Reference link: https://github.com/jitsi/webrtc/tree/master/examples
As the problem states:
I want to record but at a smaller size. I've tried MediaRecorder and it doesn't work because WebRTC is already recording and MediaRecorder has no permission to record while calling.
First of all, to reduce or minimize the size of your recorded data (audio bytes), you should look at speech codecs, which shrink the recorded data while maintaining acceptable sound quality. Well-known speech codecs include:
OPUS
SPEEX
G.711 (G-series speech codecs)
As far as the size of the audio data is concerned, it depends on the sample rate and the duration of each recorded chunk or audio packet, and it is **directly proportional** to both.
Suppose the sample rate is 8000 Hz (16-bit samples) and the chunk duration is 40 ms: the recorded data is 8000 × 0.040 = 320 samples, i.e. 640 bytes (or 320 shorts). The greater the sample rate, the greater the size.
For more detail, see the fundamentals of audio data representation. Note that WebRTC mainly processes audio in 10 ms chunks for pre-processing, in which case the packet size drops to 160 bytes.
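A quick sketch of that arithmetic, using the same example values:

```java
public class ChunkSize {
    // bytes = sampleRate * seconds * bytesPerSample; size grows linearly
    // with both the sample rate and the chunk duration.
    static int chunkBytes(int sampleRateHz, int chunkMillis, int bytesPerSample) {
        return sampleRateHz * chunkMillis / 1000 * bytesPerSample;
    }

    public static void main(String[] args) {
        System.out.println(chunkBytes(8000, 40, 2)); // 640 bytes (320 shorts)
        System.out.println(chunkBytes(8000, 10, 2)); // 160 bytes: WebRTC's 10 ms frame
    }
}
```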
Secondly, using multiple AudioRecord instances at the same time is practically impossible: since WebRTC is already recording from the microphone, a MediaRecorder instance cannot do anything, as this answer explains: audio-record-multiple-audio-at-a-time. WebRTC provides the following methods for handling the audio bytes:
1. Push input PCM data into `ProcessCaptureStream` to process in place.
2. Get the processed PCM data from `ProcessCaptureStream` and send to far-end.
3. The far end pushes the received data into `ProcessRenderStream`.
I maintain a complete tutorial on audio processing with WebRTC; for more details, see Android-Audio-Processing-Using-Webrtc.
There are two parts to the solution:
Get the raw PCM audio frames from WebRTC
Save them to a local file in a compressed format so that they can be played back later
For the first part, you have to attach a SamplesReadyCallback while creating the audio device module, by calling the setSamplesReadyCallback method of JavaAudioDeviceModule. This callback gives you the raw audio frames captured from the mic by WebRTC's AudioRecord, as sketched below.
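A rough sketch of that wiring, assuming the org.webrtc Android SDK; the onPcmFrame stub is a placeholder for your own encoder from part two:

```java
import android.content.Context;
import org.webrtc.JavaAudioDeviceModule;
import org.webrtc.PeerConnectionFactory;

public class CallRecorderSetup {
    // Build an audio device module whose callback receives every raw
    // 16-bit PCM frame WebRTC captures from the microphone.
    PeerConnectionFactory create(Context appContext) {
        JavaAudioDeviceModule adm = JavaAudioDeviceModule.builder(appContext)
                .setSamplesReadyCallback(samples ->
                        // One ~10 ms frame of raw PCM plus its format.
                        onPcmFrame(samples.getData(),
                                   samples.getSampleRate(),
                                   samples.getChannelCount()))
                .createAudioDeviceModule();

        // Hand the module to the factory so the call actually uses it.
        return PeerConnectionFactory.builder()
                .setAudioDeviceModule(adm)
                .createPeerConnectionFactory();
    }

    // Placeholder: forward frames to the encoder from part two.
    void onPcmFrame(byte[] pcm, int sampleRate, int channels) { }
}
```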
For the second part, you have to encode the raw frames and write them to a file. Check out this sample from Google on how to do it: https://android.googlesource.com/platform/frameworks/base/+/master/packages/SystemUI/src/com/android/systemui/screenrecord/ScreenInternalAudioRecorder.java#234
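The core of that sample is a MediaCodec AAC encoder feeding a MediaMuxer; a skeleton of the setup (the sample rate, channel count, and output path are assumed to come from the callback and your app):

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import java.io.IOException;

public class AacRecorder {
    MediaCodec encoder;
    MediaMuxer muxer;

    // AAC-LC at ~64 kbps is a reasonable voice setting; the muxer wraps
    // the encoded frames in an MP4/M4A container.
    void start(int sampleRate, int channelCount, String outputPath) throws IOException {
        MediaFormat format = MediaFormat.createAudioFormat(
                MediaFormat.MIMETYPE_AUDIO_AAC, sampleRate, channelCount);
        format.setInteger(MediaFormat.KEY_AAC_PROFILE,
                MediaCodecInfo.CodecProfileLevel.AACObjectLC);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 64_000);

        encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC);
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        encoder.start();
        // From here, feed PCM into the encoder's input buffers and drain
        // its output buffers into the muxer, as the linked sample does.
    }
}
```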

Android custom audio player to replace MediaPlayer with libPd or OpenSL ES or AudioTrack

I have already developed a streaming audio application using the MediaPlayer API. All the features work fine, except that it takes a long time to start playback (the buffering time is too long).
I want to add recording of the live audio stream (saving the stream data to disk, not recording from the mic). Since MediaPlayer does not provide any API to access the raw data stream, I am planning to build a custom audio player.
I want to control the buffering time, have access to the raw audio stream, and be able to play all the audio formats that Android supports natively. Which API (libPd, OpenSL ES, or AudioTrack) is suitable for building a custom audio player on Android?
In my experience OpenSL ES would be the choice; here is a link that explains how to do audio streaming that you may find useful. bufferframes determines how many samples you collect before playing, so smaller bufferframes means faster response time, but you have to balance that against your device's processing capabilities.
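If you choose AudioTrack instead, the same trade-off shows up as the buffer size passed to the constructor; a minimal streaming sketch (the decoder is a placeholder, since AudioTrack only accepts raw PCM):

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class StreamingPlayer {
    // Small buffers start playback sooner but underrun more easily;
    // large buffers add startup latency but tolerate slow network/decoding.
    void play(int sampleRate) {
        int minBuf = AudioTrack.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);

        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                minBuf * 4, AudioTrack.MODE_STREAM); // 4x minimum as a middle ground

        track.play();
        short[] pcm;
        while ((pcm = nextPcmChunk()) != null) {
            track.write(pcm, 0, pcm.length); // blocks until buffer space frees up
        }
        track.stop();
        track.release();
    }

    // Placeholder for your decoder: returns the next decoded PCM chunk,
    // or null at end of stream.
    short[] nextPcmChunk() { return null; }
}
```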
You can record with libpd (pd-for-android) too; the whole recording process is managed by libpd.
Check out the ScenePlayer project, which uses libpd and lets you record audio into a folder on the SD card:
https://github.com/libpd/pd-for-android/tree/master/ScenePlayer

Gap buffering in Android devices

I'm building a buffering engine to play streams from a URL. I need to buffer both MP3 and AAC (on devices that support it), so I can't pass the URL directly to MediaPlayer. I tried this method: I have two synchronized threads, one that creates files from the buffered data and a second that plays the files created. The problem is that when MediaPlayer switches from one file to the next, there is a small gap. How can I remove it? It is very annoying.
Maybe my method is wrong; if so, can anyone suggest a working method without choppy sound?
Thank you very much in advance.
It seems you are trying to implement gapless playback, right?
First you need to define the level of gapless playback you want to achieve: should it work across file formats/codecs, and across audio attributes like sample rate, number of channels, etc.?
With your approach, you will surely hear gaps between different streams (file formats, compression, audio attributes).
To achieve true gapless playback at the application level (my approach), you need to do the following (see the sketch after this list):
Implement a custom stack that takes the input files, decodes them, and produces PCM samples. This stack will have parsers (MP3, AAC) and decoders (MP3, AAC, ...).
Pass the PCM samples through a resampler to produce PCM samples with a common sample rate.
Add buffering modules at the input (file) and output (resampled PCM data).
Use the AudioTrack class of the Android SDK for playout.
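A minimal sketch of the playout step, assuming the parser/decoder/resampler stack above exists (decodeToPcm below is a placeholder for it); the key point is that a single AudioTrack stays open across file boundaries, so no gap is introduced:

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
import java.util.List;

public class GaplessPlayer {
    // One AudioTrack, created once and never stopped between files,
    // plays continuously; every file is decoded and resampled to the
    // same PCM format before being written.
    void playAll(List<String> playlist) {
        int rate = 44100;
        int minBuf = AudioTrack.getMinBufferSize(rate,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, rate,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                minBuf * 4, AudioTrack.MODE_STREAM);
        track.play();
        for (String file : playlist) {
            for (short[] pcm : decodeToPcm(file, rate)) {
                track.write(pcm, 0, pcm.length); // no stop() between files
            }
        }
        track.stop();
        track.release();
    }

    // Placeholder for the parse -> decode -> resample stack described above.
    List<short[]> decodeToPcm(String file, int targetRateHz) {
        throw new UnsupportedOperationException("decoder stack goes here");
    }
}
```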
If you stick to one file format, codec, and set of audio attributes, then at the application level you can concatenate all the files in the playlist and provide the result to MediaPlayer for playback. (Since audio streams are fairly small, this solution can be practical. The only obstacle is stream attributes: if the audio OMX components within the Android multimedia stack support dynamic reconfiguration, then this is no issue at all.)
Shash

Android audio: mixing a song and a recorded voice

I am developing a karaoke app and have been dealing with audio on Android. I would like advice on how to save a recorded voice and the song playing in the background as a single track.
Thanks
You can try the FFmpeg library for Android; consider using an Android wrapper. There are several SO answers that can help:
Audio song mixer in android programmatically
How to overlay two audio files using ffmpeg
FFmpeg on Android
You may want to check the amerge and amix ffmpeg filters, depending on your needs.
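For example, with the mobile-ffmpeg wrapper (assuming its FFmpeg.execute entry point; the input and output paths here are placeholders), an amix call that overlays the voice on the song could look roughly like this:

```java
import com.arthenica.mobileffmpeg.FFmpeg;

public class Mixer {
    // amix overlays the two inputs; duration=longest keeps the output as
    // long as the longer input instead of cutting at the shorter one.
    static boolean mix(String voicePath, String songPath, String outPath) {
        int rc = FFmpeg.execute(
                "-i " + voicePath + " -i " + songPath +
                " -filter_complex amix=inputs=2:duration=longest " + outPath);
        return rc == 0; // 0 = success
    }
}
```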
SoundPool is a good option if the requirement can be met just by playing the two audio streams together.
SoundPool can also manage the number of audio streams being rendered at once. When the SoundPool object is constructed, the maxStreams parameter sets the maximum number of streams that can be played at a time from this single SoundPool.
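A minimal sketch (the file paths are placeholders; note that SoundPool loads whole files into memory, so it is best suited to short clips):

```java
import android.media.AudioAttributes;
import android.media.SoundPool;

public class TwoStreamPlayer {
    // maxStreams = 2 lets the song and the voice play simultaneously.
    void playBoth(String songPath, String voicePath) {
        AudioAttributes attrs = new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                .build();
        SoundPool pool = new SoundPool.Builder()
                .setMaxStreams(2)
                .setAudioAttributes(attrs)
                .build();

        pool.load(songPath, 1);
        pool.load(voicePath, 1);
        pool.setOnLoadCompleteListener((p, sampleId, status) -> {
            if (status == 0) {
                // volume 1.0 both channels, priority 1, no loop, normal rate
                p.play(sampleId, 1f, 1f, 1, 0, 1f);
            }
        });
    }
}
```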
Also note that Android Lollipop now supports multi-channel audio stream mixing:
Multi-channel audio stream mixing means professional audio applications can now mix up to eight channels, including 5.1 and 7.1 channels.
Multi-channel support: if your hardware and driver support multichannel audio via HDMI, you can output the audio stream directly to the audio hardware.
Unfortunately, I don't think there's an on-device way to combine two audio files into one. From what I've read, the best you can do is play both at the same time.
See the SoundPool documentation.

Android audio and voice processing

I am new to Android and am presently writing an Android voice-recording application. I want to know which format is best for saving audio files on Android (i.e. RAW-AMR, 3GP, or MP4), so that we can hear the playback loudly on the device.
Is there any alternative way to increase the audio volume through voice processing on Android?
Thanks in advance.
Question: Which bear is best? Answer: Black Bear
Seriously though, you would need to state your criteria for the audio file for us to make a codec recommendation. Does it need to be portable? Best compression? Highest fidelity?
The codec that you choose has no effect on the loudness of audio that will be played over the device, so this should not factor into your criteria.
Is there an alternative way to increase the audio volume?
Yes, if you are recording audio from the microphone then you can amplify the audio data before you save it to a file.
Let an audio sample from the microphone be represented by the function:
f(t)
Amplification is achieved by multiplying the audio sample by some factor A
A * f(t)
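In code, a minimal version of A * f(t) for 16-bit PCM samples, with clamping so loud samples saturate instead of wrapping around:

```java
public class Amplifier {
    // Multiply each 16-bit PCM sample by the gain factor a, clamping the
    // result to the short range to avoid wrap-around distortion.
    public static void amplify(short[] pcm, float a) {
        for (int i = 0; i < pcm.length; i++) {
            int v = Math.round(pcm[i] * a);
            pcm[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, v));
        }
    }
}
```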
You can use the AGC (Automatic Gain Control) module from WebRTC to increase the sound level.
I haven't found a simple Java API for it yet; you can use the C++ API via JNI.
Have a look here: WebRTC AGC (Automatic Gain Control).
I want to know which format is best for saving audio files on Android.
To save voice audio on Android (or any other platform), take a look at Opus. It's a free, state-of-the-art audio codec that also supports voice mode.
