How can I setRate for Android MediaPlayer?

How can I implement
setRate(float f)
for my Android MediaPlayer, and secondly, is it possible?

I believe AudioTrack.setPlaybackRate(int sampleRateInHz) is the function you are looking for.
It sets the sampling rate at which the audio data will be consumed and played back, not the original sampling rate of the content. Setting it to half the sample rate of the content will cause the playback to last twice as long, but will also result in a downward pitch shift. The valid sample rate range is from 1 Hz to twice the value returned by getNativeOutputSampleRate(int).
If you want to play an mp3 directly using AudioTrack, you can either have a look at this example or convert your mp3 file to wav format, which AudioTrack can consume without hassle. That is the tradeoff you should weigh if you want to adjust the playback rate easily.
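For illustration, a minimal sketch of that approach, assuming you already have the decoded 16-bit mono PCM in a byte array pcm (the names and the 44.1 kHz content rate are placeholders):
int contentRate = 44100; // the content's original sample rate (assumed)
int minBuf = AudioTrack.getMinBufferSize(contentRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, contentRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        Math.max(minBuf, pcm.length), AudioTrack.MODE_STATIC);
track.write(pcm, 0, pcm.length);        // MODE_STATIC: load the data before play()
track.setPlaybackRate(contentRate / 2); // half speed: twice as long, pitched down
track.play();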

Android 6.0 adds PlaybackParams for MediaPlayer, so you can now do this:
String recordingPath = recordingDirectory + File.separator + "music.mp3";
MediaPlayer audioPlayer = MediaPlayer.create(getApplicationContext(), Uri.parse(recordingPath));
audioPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC);
PlaybackParams params = new PlaybackParams();
params.setSpeed(0.75f);
audioPlayer.setPlaybackParams(params);
audioPlayer.start();
I don't have an Android 6 device yet, but this works for me in the emulator.
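Since PlaybackParams requires API 23, you may want to guard the call; a sketch, reusing the audioPlayer from above (I believe the PlaybackParams setters are chainable, but treat that as an assumption):
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
    // getPlaybackParams() returns the current params; setSpeed() returns them for chaining
    audioPlayer.setPlaybackParams(audioPlayer.getPlaybackParams().setSpeed(0.75f));
}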

Based on the Android developer documentation, you may have to use SoundPool instead.
Android Developer: Media SoundPool-setRate
public final void setRate (int streamID, float rate)
Change playback rate. The playback rate allows the application to vary
the playback rate (pitch) of the sound. A value of 1.0 means playback
at the original frequency. A value of 2.0 means playback twice as
fast, and a value of 0.5 means playback at half speed. If the stream
does not exist, it will have no effect.
Parameters
streamID: a streamID returned by the play() function
rate: playback rate (1.0 = normal playback, range 0.5 to 2.0)
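To make that concrete, here is a minimal sketch (untested; context and R.raw.beep are placeholders) that plays a loaded sound at 1.5x speed once loading completes:
SoundPool soundPool = new SoundPool.Builder()
        .setMaxStreams(1)
        .setAudioAttributes(new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                .build())
        .build();
int soundId = soundPool.load(context, R.raw.beep, 1); // placeholder resource
soundPool.setOnLoadCompleteListener((pool, sampleId, status) -> {
    if (status == 0) { // 0 means the sample loaded successfully
        // play(soundID, leftVolume, rightVolume, priority, loop, rate)
        int streamId = pool.play(sampleId, 1.0f, 1.0f, 1, 0, 1.0f);
        pool.setRate(streamId, 1.5f); // valid range is 0.5 to 2.0
    }
});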

Related

Android oboe c++ Some sounds distorted on playback

I'm using the Android oboe library for high performance audio in a music game.
In the assets folder I have 2 .raw files (both 48000 Hz, 16-bit PCM, about 60 kB each):
std_kit_sn.raw
std_kit_ht.raw
These are loaded into memory as SoundRecordings and added to a Mixer. kSampleRateHz is 48000:
stdSN = SoundRecording::loadFromAssets(mAssetManager, "std_kit_sn.raw");
stdHT = SoundRecording::loadFromAssets(mAssetManager, "std_kit_ht.raw");
mMixer.addTrack(stdSN);
mMixer.addTrack(stdHT);
// Create a builder
AudioStreamBuilder builder;
builder.setFormat(AudioFormat::I16);
builder.setChannelCount(1);
builder.setSampleRate(kSampleRateHz);
builder.setCallback(this);
builder.setPerformanceMode(PerformanceMode::LowLatency);
builder.setSharingMode(SharingMode::Exclusive);
LOGD("After creating a builder");
// Open stream
Result result = builder.openStream(&mAudioStream);
if (result != Result::OK) {
    LOGE("Failed to open stream. Error: %s", convertToText(result));
}
LOGD("After openstream");
// Reduce stream latency by setting the buffer size to a multiple of the burst size
mAudioStream->setBufferSizeInFrames(mAudioStream->getFramesPerBurst() * 2);
// Start the stream
result = mAudioStream->requestStart();
if (result != Result::OK) {
    LOGE("Failed to start stream. Error: %s", convertToText(result));
}
LOGD("After starting stream");
The sounds are triggered to play at the required times with standard code (as per the Google tutorials):
stdSN->setPlaying(true);
stdHT->setPlaying(true); //Nasty Sound
The audio callback is standard (as per Google tutorials):
DataCallbackResult SoundFunctions::onAudioReady(AudioStream *mAudioStream, void *audioData, int32_t numFrames) {
    // Play the stream
    mMixer.renderAudio(static_cast<int16_t*>(audioData), numFrames);
    return DataCallbackResult::Continue;
}
The std_kit_sn.raw plays fine, but std_kit_ht.raw has a nasty distortion. Both play with low latency. Why does one play fine while the other is badly distorted?
I loaded your sample project and I believe the distortion you hear is caused by clipping/wraparound during mixing of sounds.
The Mixer object from the sample is a summing mixer. It just adds the values of each track together and outputs the sum.
You need to add some code to reduce the volume of each track to avoid exceeding the limits of an int16_t (although you're welcome to file a bug on the oboe project and I'll try to add this in an upcoming version). If you exceed this limit you'll get wraparound which is causing the distortion.
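As a rough illustration of the fix (plain Java rather than the sample's C++ Mixer, and not oboe's actual API): scale each track before summing, then clamp to the int16_t range so the sum can never wrap around.
static void mixInto(short[] out, short[][] tracks, float gainPerTrack) {
    for (int i = 0; i < out.length; i++) {
        float sum = 0f;
        for (short[] track : tracks) {
            if (i < track.length) sum += track[i] * gainPerTrack;
        }
        // Clamp as a safety net; a gain of 1.0f / tracks.length guarantees no overflow
        out[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
    }
}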
Additionally, your app is hardcoded to run at 22050 frames/sec. This will result in sub-optimal latency across most mobile devices because the stream is forced to upsample to the audio device's native frame rate. A better approach would be to leave the sample rate undefined when opening the stream - this will give you the optimal frame rate for the current audio device - then use a resampler on your source files to supply audio at this frame rate.

audio latency issues

In the application I want to create, I face some technical obstacles. The app works with two music tracks. For example, a user imports a music background as the first track. The second track is a voice recorded by the user in rhythm with the first track, played through the device speaker (or headphones). At this point we run into latency: after recording and playing back in the app, the user hears a loss of synchronisation between the tracks, caused by the microphone and speaker latencies.
Firstly, I try to detect the delay by filtering the input sound. I use Android's AudioRecord class and its read() method, which fills my short array with audio data.
I found that the initial values of this array are zeros, so I decided to cut them out before writing them into the output stream.
So I consider those zeros a "warm-up" latency of the microphone. Is this approach correct? This operation gives some results, but it doesn't resolve the problem, and at this stage I'm far away from that.
The worse case is the delay between starting the speakers and the music actually playing. This delay I cannot filter or detect. I tried to create a calibration feature that measures the delay: I play a "beep" sound through the speakers, and the moment I start playing it, I also begin to measure time. Then I start recording and listen for this sound being picked up by the microphone. When I recognise the sound in the app, I stop measuring time. I repeat this process several times, and the final value is the average of those results. That is how I try to measure the latency of the device. Once I have this value, I can simply shift the second track backwards to achieve synchronisation of both recordings (I will lose some initial milliseconds of the recording, but I'll skip that case for now; there are some ways to fix it).
I thought that this approach would resolve the problem, but it turned out this is not as simple as I thought. I found two issues here:
1. Delay while playing two tracks simultaneously
2. Non-deterministic (random) audio latency of the device.
The first issue: I play two tracks using the AudioTrack class, calling play() like this:
val firstTrack = //creating a track
val secondTrack = //creating a track
firstTrack.play()
secondTrack.play()
This code causes delays at the stage of playing the tracks. Now, I don't even have to think about latency while recording; I cannot even play two tracks simultaneously without delays. I tested this with an external audio file (not recorded in my app): I start the same audio file twice using the code above, and I can hear a delay. I also tried it with the MediaPlayer class, with the same results. In that case, I even tried starting the tracks from the OnPreparedListener callback:
val firstTrack = //AudioPlayer
val secondTrack = //AudioPlayer
secondTrack.setOnPreparedListener {
    firstTrack.start()
    secondTrack.start()
}
And it doesn’t help.
I know that there is one more class provided by Android called SoundPool. According to the documentation, it can be better at playing tracks simultaneously, but I can't use it because it only supports small audio files, and that limitation rules it out for me.
How can I resolve this problem? How can I start playing two tracks precisely at the same time?
The second issue: audio latency is not deterministic - sometimes it is smaller, sometimes it's huge, and it's out of my hands. So measuring the device latency can help, but again - it cannot fully resolve the problem.
To sum up: is there any solution that can give me the exact latency per device (or per app session), or any other way to detect the actual delay, so I can provide the best synchronisation when playing back two tracks at the same time?
Thank you in advance!
Synchronising audio for karaoke apps is tough. The main issue you seem to be facing is variable latency in the output stream.
This is almost certainly caused by "warm up" latency: the time it takes from hitting "play" on your backing track to the first frame of audio data being rendered by the audio device (e.g. headphones). This can have large variance and is difficult to measure.
The first (and easiest) thing to try is to use MODE_STREAM when constructing your AudioTrack and prime it with bufferSizeInBytes of data prior to calling play(). This should result in lower, more consistent "warm up" latency.
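A rough sketch of that priming approach, assuming 16-bit mono PCM at 44.1 kHz; readPcm() is a hypothetical helper that fills the buffer from your decoded source:
int sampleRate = 44100;
int bufferSizeInBytes = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        bufferSizeInBytes, AudioTrack.MODE_STREAM);
byte[] firstChunk = new byte[bufferSizeInBytes];
readPcm(firstChunk);                           // hypothetical: fill with the first audio frames
track.write(firstChunk, 0, firstChunk.length); // prime the buffer before play()
track.play();                                  // playback starts with data already queued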
A better way is to use the Android NDK to have a continuously running audio stream which is just outputting silence until the moment you hit play, then start sending audio frames immediately. The only latency you have here is the continuous output latency.
If you decide to go down this route I recommend taking a look at the Oboe library (full disclosure: I am one of the authors).
To answer one of your specific questions...
Is there a way to calculate the latency of the audio output stream programmatically?
Yes. The easiest way to explain this is with a code sample (this is C++ for the AAudio API but the principle is the same using Java AudioTrack):
// Get the index and time that a known audio frame was presented for playing
int64_t existingFrameIndex;
int64_t existingFramePresentationTime;
AAudioStream_getTimestamp(stream, CLOCK_MONOTONIC, &existingFrameIndex, &existingFramePresentationTime);
// Get the write index for the next audio frame
int64_t writeIndex = AAudioStream_getFramesWritten(stream);
// Calculate the number of frames between our known frame and the write index
int64_t frameIndexDelta = writeIndex - existingFrameIndex;
// Calculate the time which the next frame will be presented
int64_t frameTimeDelta = (frameIndexDelta * NANOS_PER_SECOND) / sampleRate_;
int64_t nextFramePresentationTime = existingFramePresentationTime + frameTimeDelta;
// Assume that the next frame will be written into the stream at the current time
int64_t nextFrameWriteTime = get_time_nanoseconds(CLOCK_MONOTONIC);
// Calculate the latency
*latencyMillis = (double) (nextFramePresentationTime - nextFrameWriteTime) / NANOS_PER_MILLISECOND;
A caveat: This method relies on accurate timestamps being reported by the audio hardware. I know this works on Google Pixel devices but have heard reports that it isn't so accurate on other devices so YMMV.
Following the answer of donturner, here's a Java version (that also uses other methods depending on the SDK version)
/** The audio latency has not been estimated yet */
private static long AUDIO_LATENCY_NOT_ESTIMATED = Long.MIN_VALUE+1;
/** The audio latency default value if we cannot estimate it */
private static long DEFAULT_AUDIO_LATENCY = 100L * 1000L * 1000L; // 100ms
/**
* Estimate the audio latency
*
* Not accurate at all, depends on SDK version, etc. But that's the best
* we can do.
*/
private static long estimateAudioLatency(AudioTrack track, long audioFramesWritten) {
    long estimatedAudioLatency = AUDIO_LATENCY_NOT_ESTIMATED;

    // First method. SDK >= 19.
    if (Build.VERSION.SDK_INT >= 19 && track != null) {
        AudioTimestamp audioTimestamp = new AudioTimestamp();
        if (track.getTimestamp(audioTimestamp)) {
            // Calculate the number of frames between our known frame and the write index
            long frameIndexDelta = audioFramesWritten - audioTimestamp.framePosition;
            // Calculate the time at which the next frame will be presented
            long frameTimeDelta = _framesToNanoSeconds(frameIndexDelta);
            long nextFramePresentationTime = audioTimestamp.nanoTime + frameTimeDelta;
            // Assume that the next frame will be written at the current time
            long nextFrameWriteTime = System.nanoTime();
            // Calculate the latency
            estimatedAudioLatency = nextFramePresentationTime - nextFrameWriteTime;
        }
    }

    // Second method. SDK >= 18.
    if (estimatedAudioLatency == AUDIO_LATENCY_NOT_ESTIMATED && Build.VERSION.SDK_INT >= 18) {
        Method getLatencyMethod;
        try {
            getLatencyMethod = AudioTrack.class.getMethod("getLatency", (Class<?>[]) null);
            estimatedAudioLatency = (Integer) getLatencyMethod.invoke(track, (Object[]) null) * 1000000L;
        } catch (Exception ignored) {}
    }

    // If no method has successfully given us a value, try a third method
    if (estimatedAudioLatency == AUDIO_LATENCY_NOT_ESTIMATED) {
        AudioManager audioManager = (AudioManager) CRT.getInstance().getSystemService(Context.AUDIO_SERVICE);
        try {
            Method getOutputLatencyMethod = audioManager.getClass().getMethod("getOutputLatency", int.class);
            estimatedAudioLatency = (Integer) getOutputLatencyMethod.invoke(audioManager, AudioManager.STREAM_MUSIC) * 1000000L;
        } catch (Exception ignored) {}
    }

    // No method gave us a value. Use a default value. Better than nothing.
    if (estimatedAudioLatency == AUDIO_LATENCY_NOT_ESTIMATED) {
        estimatedAudioLatency = DEFAULT_AUDIO_LATENCY;
    }

    return estimatedAudioLatency;
}

private static long _framesToNanoSeconds(long frames) {
    return frames * 1000000000L / SAMPLE_RATE;
}
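Hypothetical usage, converting the estimate (in nanoseconds) into a frame offset for aligning the recorded track; track and framesWritten are whatever your playback code already tracks:
long latencyNs = estimateAudioLatency(track, framesWritten);
long latencyFrames = latencyNs * SAMPLE_RATE / 1000000000L;
// Shift the recorded vocal track by latencyFrames when aligning it with the backing track.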
The Android MediaPlayer class is notoriously slow to begin audio playback. I experienced an issue in an app I was creating where there was a delay of more than one second before an audio clip started playing. I resolved it by switching to ExoPlayer, which got playback starting within 100 ms. I've also read that ffmpeg has an even faster audio startup time than ExoPlayer, but I haven't used it, so I can't make any promises.
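For reference, starting playback with ExoPlayer looks roughly like this (a sketch assuming an ExoPlayer 2.12+ dependency; the URL is a placeholder):
ExoPlayer player = new ExoPlayer.Builder(context).build();
player.setMediaItem(MediaItem.fromUri("https://www.example.org/song.mp3"));
player.prepare(); // asynchronous; playback can start as soon as enough is buffered
player.play();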

precision of Android MediaPlayer seekTo

I have a number of mp3 files that I use with Android MediaPlayer to play from certain offsets.
Using seekTo() seems to stop at the correct location, and player.getCurrentPosition() returns the correct offset, but in some cases the real position is off by as much as 200 ms. The files are about 3 minutes' worth of recording, and the incorrect offsets seem to appear near the end of some of the files.
I have the same effect either trying with Android 4.0.3 device or 4.3 emulator.
Does anybody have experience with "fine-tuning" MediaPlayer offsets? Any idea why MediaPlayer might not work correctly with some files? They are all CBR stereo; some have a sampling frequency of 22050 Hz, some 44100 Hz, with different bitrates.
I'm setting the offsets from another program and saving them to mp3 tags, then in case of doubt verifying them manually using Audacity. Audacity agrees with my estimate of the correct offset; MediaPlayer seems to disagree.
I'm aware that I could use AudioTrack with raw sound files and have better control, but it might be impractical as there are many mp3 files, so using raw sound data would make a pretty large application or many large data files.
The code is nothing fancy:
player.seekTo(start);
player.start();
CountDownTimer timer = new CountDownTimer(length, 100) {
    @Override
    public void onTick(long millisUntilFinished) {
        if (player != null) setInt(R.id.nLocation, player.getCurrentPosition());
    }

    @Override
    public void onFinish() {
        if (player != null) {
            if (player.isPlaying()) {
                player.pause();
            }
            setInt(R.id.nLocation, player.getCurrentPosition());
            player.stop();
            player.release();
            player = null;
        }
    }
};
timer.start();
I did not manage to find a rule for why MediaPlayer interprets the offset (seekTo) differently for a group of MP3 files. For example, when creating a new MP3 file with the same parameters from Audacity+Lame (MPEG1, Layer III, 44100 Hz, 192 kb/s), it worked perfectly.
However:
this can be reproduced - rip an MP3 file using Windows Media Player with settings: MP3, 192 kb/s
I found a workaround that seems to work for any recording.
The background - in order to tell MediaPlayer to play from certain offset, I store certain data in MP3 tags. I use a separate program to set up the playback (in frames): Label A, start frame=1000, length=100 frames, Label B, start #1500 etc. Now when I need to play it back, I read the MP3 headers, determine the frame length, for example 26.12245 ms/frame and calculate the offset (1000 frames will be 26122 ms).
The workaround is to also store in the MP3 tag the frame count and the length in ms (or pass through the file again and count the frames). Then, when starting MediaPlayer, compare MediaPlayer.getDuration() (MediaPlayer's estimate) with the duration stored in the MP3 tag, and adjust the frame size:
adjustedFrameSizeMs = realFrameSizeMs + (player.getDuration()-storedDurationMs)/storedframeCount;
In my case (for the files with incorrect offset) the adjusted frame length always was between 26.08 and 26.09 ms (instead of 26.12245).
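In code, that adjustment might look like this (a sketch; storedDurationMs, storedFrameCount and startFrame come from the MP3 tag as described above, and the division is done in floating point so the sub-millisecond correction isn't lost):
double adjustedFrameSizeMs = realFrameSizeMs
        + (player.getDuration() - storedDurationMs) / (double) storedFrameCount;
int startMs = (int) Math.round(startFrame * adjustedFrameSizeMs);
player.seekTo(startMs);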
I attempted to check whether this is because Android plays the recording more quickly (so that it estimates the "real time", not the time implied by the frame size and frame count). It seems that it really does play more quickly, even more quickly than its own estimate. For example, for a recording of about 1 hour:
my estimate: 2448 s
MediaPlayer: 2444 s (4 sec difference)
Audacity: 2442 s (here we are in disagreement)
Foobar: 2448 s (another witness that agrees with my estimate :-)
MediaPlayer, real play time: 2438 s
The real play time was 6 s (0.25%) less than MediaPlayer's own estimate. Another attempt on a different sample gave the same percentage difference. However, the fact that Audacity and Foobar did not always agree with my estimates does not let me put all the blame on MediaPlayer.

Increase MediaPlayer volume beyond 100%

The code below works, but it does not increase the media player volume beyond the default maximum volume. Please help.
AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
am.setStreamVolume(
        AudioManager.STREAM_MUSIC,
        am.getStreamMaxVolume(AudioManager.STREAM_MUSIC),
        0);
The MediaPlayer class's setVolume() method only accepts scalars in the range [0.0, 1.0], but the classes deriving from AudioEffect can be used to amplify the MediaPlayer's audio session.
For example, LoudnessEnhancer amplifies samples by a gain specified in millibels (i.e. hundredths of decibels):
MediaPlayer player = new MediaPlayer();
player.setDataSource("https://www.example.org/song.mp3");
player.prepare();

// Increase amplitude by 20%: millibels = 2000 * log10(amplitude ratio).
double audioPct = 1.2;
int gainmB = (int) Math.round(Math.log10(audioPct) * 2000);

LoudnessEnhancer enhancer = new LoudnessEnhancer(player.getAudioSessionId());
enhancer.setTargetGain(gainmB);
enhancer.setEnabled(true); // the effect must be enabled before it takes effect
It's unclear from the documentation, but it appeared to me that LoudnessEnhancer doesn't work properly with negative gains, so you may still need to use MediaPlayer's setVolume() method if you want to decrease the volume.
DynamicsProcessing provides multiple stages across multiple channels, including an input gain stage.
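For example, a sketch using its input gain stage (API 28+; I'm assuming the default Config is usable as-is, so treat this as an illustration rather than a drop-in):
DynamicsProcessing dp = new DynamicsProcessing(player.getAudioSessionId());
dp.setInputGainAllChannelsTo(6.0f); // input gain in decibels
dp.setEnabled(true);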
To increase the device volume beyond the system volume, you have to go into engineer mode. To do that, paste the code below into the dialer's number-entry box; it will redirect you directly to engineer mode:
*#*#3646633#*#*
This gives you access to the system settings. One caution: don't use this carelessly, as it may affect your system's performance.

Get media player to play first on right speaker and then on left speaker

I would like to play an audio file that starts on the left speaker and then switches to the right speaker.
I have tried doing something like this:
MediaPlayer mp = new MediaPlayer();
// Setup audio file
mp.start();
mp.setVolume(1.0F, 0F);
// Delay a second or two (I actually use a Handler and the postDelayed method)
mp.setVolume(0F, 1.0F);
but the sound comes through on both speakers the whole time.
How can I play audio in Android with either the left or right speaker muted (or at reduced volume)?
EDIT:
I got the correct behavior for a while when I was testing my app, but then it went back to what I described above with the exact same code base. Given this, is there anything else I could check to find out what's going on?
One option would be:
1. Start the MediaPlayer with setVolume(1.0F, 0F).
2. When you want to switch to the other speaker, get the current position of the media player using the getCurrentPosition() method.
3. Stop the media player.
4. Start it again with setVolume(0F, 1.0F).
5. Seek to the position you got in step 2 using the seekTo() method.
Done.
Overhead: this method may cause some delay.
It looks like you are doing it correctly according to the Android API http://developer.android.com/reference/android/media/MediaPlayer.html
public void setVolume (float leftVolume, float rightVolume)
Sets the volume on this player. This API is recommended for balancing the output of
audio streams within an application. Unless you are writing an application to control
user settings, this API should be used in preference to setStreamVolume(int, int, int)
which sets the volume of ALL streams of a particular type. Note that the passed volume
values are raw scalars in range 0.0 to 1.0. UI controls should be scaled logarithmically.
Parameters
leftVolume left volume scalar
rightVolume right volume scalar
My best advice is to try 0.0F instead of just 0F, and perhaps to set the volume before you start playing the track, then transition while it's playing.
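Putting that advice together, a sketch of the Handler-based switch (context and R.raw.clip are placeholders):
final MediaPlayer mp = MediaPlayer.create(context, R.raw.clip); // placeholder resource
mp.setVolume(1.0f, 0.0f); // left speaker only, set before start()
mp.start();
new Handler(Looper.getMainLooper()).postDelayed(new Runnable() {
    @Override
    public void run() {
        mp.setVolume(0.0f, 1.0f); // switch to the right speaker
    }
}, 2000); // after two seconds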
