OpenSL: What's causing static when loading wav files through AAsset_read?

I'm working on a native android project and trying to use OpenSL to play some audio effects. Working from the native audio sample project VisualGDB provides, I've written the code posted below.
Near the end, you can see I've commented out a line that enqueues the contents of a variable called hello into the buffer queue. hello comes from the sample project and contains about 700 lines of byte literals like this:
"\x02\x00\x01\x00\xff\xff\x09\x00\x0c\x00\x10\x00\x07\x00\x07\x00"
which together form an audio clip of someone saying "hello". When reading that byte data into the stream, my code works fine and I hear "hello" when I run the application. When I read from a wav file to play the asset I want, however, I only hear static. The size of the data buffer is the same as the size of the file, so it appears to be read in properly. The static plays for the duration of the wav file (or very close to it).
I really know nothing about data formats or audio programming. I've tried tweaking the format_pcm fields with different enum values, but had no success. Using a tool called GSpot that I found online, I know the following about the audio file I'm trying to play:
File Size: 557 KB (570,503 bytes) (this is the same size as the data buffer AAsset_read returns)
Codec: PCM Audio
Sample rate: 48000Hz
Bit rate: 1152 kb/s
Channels: 1
Any help or direction would be greatly appreciated.
SLDataLocator_AndroidSimpleBufferQueue loc_bufq = { SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 1 };
SLDataFormat_PCM format_pcm;
format_pcm.formatType = SL_DATAFORMAT_PCM;
format_pcm.numChannels = 1;
format_pcm.samplesPerSec = SL_SAMPLINGRATE_48;// SL_SAMPLINGRATE_8;
format_pcm.bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_8; // SL_PCMSAMPLEFORMAT_FIXED_16;
format_pcm.containerSize = 16;
format_pcm.channelMask = SL_SPEAKER_FRONT_CENTER;
format_pcm.endianness = SL_BYTEORDER_LITTLEENDIAN;
SLDataSource audioSrc = { &loc_bufq, &format_pcm };
// configure audio sink
SLDataLocator_OutputMix loc_outmix = { SL_DATALOCATOR_OUTPUTMIX, manager->GetOutputMixObject() };
SLDataSink audioSnk = { &loc_outmix, NULL };
//create audio player
const SLInterfaceID ids[3] = { SL_IID_BUFFERQUEUE, SL_IID_EFFECTSEND, SL_IID_VOLUME };
const SLboolean req[3] = { SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE };
SLEngineItf engineEngine = manager->GetEngine();
result = (*engineEngine)->CreateAudioPlayer(engineEngine, &bqPlayerObject, &audioSrc, &audioSnk,
3, ids, req);
// realize the player
result = (*bqPlayerObject)->Realize(bqPlayerObject, SL_BOOLEAN_FALSE);
// get the play interface
result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_PLAY, &bqPlayerPlay);
// get the buffer queue interface
result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_BUFFERQUEUE,
&bqPlayerBufferQueue);
// register callback on the buffer queue
result = (*bqPlayerBufferQueue)->RegisterCallback(bqPlayerBufferQueue, bqPlayerCallback, NULL);
// get the effect send interface
result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_EFFECTSEND,
&bqPlayerEffectSend);
// get the volume interface
result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_VOLUME, &bqPlayerVolume);
// set the player's state to playing
result = (*bqPlayerPlay)->SetPlayState(bqPlayerPlay, SL_PLAYSTATE_PLAYING);
uint8* pOutBytes = nullptr;
uint32 outSize = 0;
result = MyFileManager::GetInstance()->OpenFile(m_strAbsolutePath, (void**)&pOutBytes, &outSize, true);
const char* filename = m_strAbsolutePath->GetUTF8String();
result = (*bqPlayerBufferQueue)->Enqueue(bqPlayerBufferQueue, pOutBytes, outSize);
// result = (*bqPlayerBufferQueue)->Enqueue(bqPlayerBufferQueue, hello, sizeof(hello));
if (SL_RESULT_SUCCESS != result) {
    return JNI_FALSE;
}

Several things were to blame. The format of the wave files I was testing with was not what the specification described; there seemed to be a lot of empty data after the first chunk of header data. Also, the buffer passed to the queue needs to be a char* of just the wav data, not the header. I'd wrongly assumed the queue parsed the header out.
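In case it's useful, here's a minimal sketch of that fix, assuming a canonical little-endian RIFF layout already loaded into memory; FindWavData is an illustrative helper name, not part of the original project:

#include <cstddef>
#include <cstdint>
#include <cstring>

// Walks the RIFF chunks and returns a pointer to the PCM payload of the
// "data" chunk (writing its size to outSize), or nullptr if none is found.
static const uint8_t* FindWavData(const uint8_t* file, size_t fileSize, uint32_t* outSize)
{
    if (fileSize < 12 || std::memcmp(file, "RIFF", 4) != 0 || std::memcmp(file + 8, "WAVE", 4) != 0)
        return nullptr;
    size_t offset = 12; // first chunk header after the RIFF/WAVE preamble
    while (offset + 8 <= fileSize) {
        uint32_t chunkSize;
        std::memcpy(&chunkSize, file + offset + 4, 4);
        if (std::memcmp(file + offset, "data", 4) == 0) {
            *outSize = chunkSize;
            return file + offset + 8;
        }
        offset += 8 + chunkSize + (chunkSize & 1); // chunks are word-aligned
    }
    return nullptr;
}

With the player above, that would be used roughly as:

uint32_t pcmSize = 0;
const uint8_t* pcm = FindWavData(pOutBytes, outSize, &pcmSize);
if (pcm != nullptr)
    result = (*bqPlayerBufferQueue)->Enqueue(bqPlayerBufferQueue, pcm, pcmSize);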

Related

C++, Android NDK: How to save my raw audio data to file properly and load it again

I'm working on an Android app that plays back audio. To minimize latency, I'm playing the audio in C++ via JNI, using the C++ library oboe.
Currently, before playback, the app has to decode the given file (e.g. an mp3) and then play back the decoded raw audio stream. This leads to a wait before playback starts if the file is large.
So I would like to do the decoding beforehand, save it, and, when playback is requested, just play the decoded data from the saved file.
I have next to no knowledge of how to do proper file i/o in C++ and have a hard time wrapping my head around it. It is possible that my problem can be solved just with the right library, I'm not sure.
So currently I am saving my file like this:
bool Converter::doConversion(const std::string& fullPath, const std::string& name) {
    // here I'm setting up the extractor and necessary inputs. Omitted since not relevant

    // this is where the decoder is called to decode a file to raw audio
    constexpr int kMaxCompressionRatio{12};
    const long maximumDataSizeInBytes = kMaxCompressionRatio * (size) * sizeof(int16_t);
    auto decodedData = new uint8_t[maximumDataSizeInBytes];
    int64_t bytesDecoded = NDKExtractor::decode(*extractor, decodedData);
    auto numSamples = bytesDecoded / sizeof(int16_t);
    auto outputBuffer = std::make_unique<float[]>(numSamples);

    // This block is necessary to get the correct format for oboe.
    // The NDK decoder can only decode to int16, we need to convert to floats
    oboe::convertPcm16ToFloat(
        reinterpret_cast<int16_t *>(decodedData),
        outputBuffer.get(),
        bytesDecoded / sizeof(int16_t));

    // This is how I currently save my outputBuffer to a file. This produces a file on the disk.
    std::string outputSuffix = ".pcm";
    std::string outputName = std::string(mFolder) + name + outputSuffix;
    std::ofstream outfile(outputName.c_str(), std::ios::out | std::ios::binary);
    outfile.write(reinterpret_cast<const char *>(&outputBuffer), sizeof outputBuffer);
    return true;
}
So I believe I take my float array, convert it to a char array, and save it. I am not certain this is correct, but that is my best understanding of it.
There is a file afterwards, anyway.
Edit: As I found out when analyzing my saved file, I only store 8 bytes.
Now how do I load this file again and restore the contents of my outputBuffer?
Currently I have this bit, which is clearly incomplete:
StorageDataSource *StorageDataSource::openPCM(const char *fileName, AudioProperties targetProperties) {
    long bufferSize;
    char *buffer;
    std::ifstream stream(fileName, std::ios::in | std::ios::binary);
    stream.seekg(0, std::ios::beg);
    bufferSize = stream.tellg();
    buffer = new char[bufferSize];
    stream.read(buffer, bufferSize);
    stream.close();
If this is correct, what do I have to do to restore the data as the original type? If I am doing it wrong, how does it work the right way?
I figured out how to do it thanks to @Michael's comments.
This is how I save my data now:
bool Converter::doConversion(const std::string& fullPath, const std::string& name) {
    // here I'm setting up the extractor and necessary inputs. Omitted since not relevant

    // this is where the decoder is called to decode a file to raw audio
    constexpr int kMaxCompressionRatio{12};
    const long maximumDataSizeInBytes = kMaxCompressionRatio * (size) * sizeof(int16_t);
    auto decodedData = new uint8_t[maximumDataSizeInBytes];
    int64_t bytesDecoded = NDKExtractor::decode(*extractor, decodedData);
    auto numSamples = bytesDecoded / sizeof(int16_t);

    // converting to float has moved to the reading function, so now I save decodedData directly
    std::string outputSuffix = ".pcm";
    std::string outputName = std::string(mFolder) + name + outputSuffix;
    std::ofstream outfile(outputName.c_str(), std::ios::out | std::ios::binary);
    outfile.write((char*)decodedData, numSamples * sizeof(int16_t));
    return true;
}
And this is how I read the stored file again:
long bufferSize;
char *inputBuffer;
std::ifstream stream;
stream.open(fileName, std::ifstream::in | std::ifstream::binary);
if (!stream.is_open()) {
    // handle error
}
stream.seekg(0, std::ios::end);   // seek to the end
bufferSize = stream.tellg();      // get size info; will be 0 without seeking to the end
stream.seekg(0, std::ios::beg);   // seek back to the beginning
inputBuffer = new char[bufferSize];
stream.read(inputBuffer, bufferSize); // the actual read into the buffer; would read nothing without seeking back to the beginning
stream.close();
// done reading the file

auto numSamples = bufferSize / sizeof(int16_t); // calculate the number of samples, so the audio is correctly interpreted
auto outputBuffer = std::make_unique<float[]>(numSamples);

// the conversion now happens after the file is opened. This avoids confusion.
// The NDK decoder can only decode to int16, we need to convert to floats
oboe::convertPcm16ToFloat(
    reinterpret_cast<int16_t *>(inputBuffer),
    outputBuffer.get(),
    bufferSize / sizeof(int16_t));
// here I continue working with my outputBuffer
The important bits of C++ understanding I was missing were:
a) the size of a pointer is not the same as the size of the data it points to, and
b) how seeking in a stream works. I needed to put the needle back to the start before I would find any data in my buffer.
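A minimal, self-contained illustration of both points (the file name audio.pcm is hypothetical):

#include <fstream>
#include <iostream>

int main() {
    // (a) sizeof a pointer is the size of the pointer itself, not of the allocation
    float* samples = new float[1024];
    std::cout << sizeof(samples) << '\n';      // 8 on a typical 64-bit build, never 4096
    std::cout << 1024 * sizeof(float) << '\n'; // 4096: the size you must track yourself
    delete[] samples;

    // (b) tellg() reports the current position, so seek to the end to learn the size
    std::ifstream stream("audio.pcm", std::ios::binary);
    stream.seekg(0, std::ios::end);
    std::streamsize size = stream.tellg(); // file size in bytes
    stream.seekg(0, std::ios::beg);        // and back to the start before reading
    return 0;
}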

How to play decoded in-memory PCM with Oboe properly?

I use oboe to play sounds in my ndk library, and I use OpenSL with Android extensions to decode wav files into PCM. The decoded signed 16-bit PCM samples are stored in memory (std::forward_list<int16_t>) and then sent into the oboe stream via a callback. The sound I hear from my phone resembles the original wav file in volume level, but its quality does not: it bursts and crackles.
I am guessing that I send the PCM into the audio stream in the wrong order or format (sampling rate?). How can I use OpenSL decoding with an oboe audio stream?
To decode files to PCM, I use AndroidSimpleBufferQueue as a sink, and AndroidFD with AAssetManager as a source:
// Loading asset
AAsset* asset = AAssetManager_open(manager, path, AASSET_MODE_UNKNOWN);
off_t start, length;
int fd = AAsset_openFileDescriptor(asset, &start, &length);
AAsset_close(asset);
// Creating audio source
SLDataLocator_AndroidFD loc_fd = { SL_DATALOCATOR_ANDROIDFD, fd, start, length };
SLDataFormat_MIME format_mime = { SL_DATAFORMAT_MIME, NULL, SL_CONTAINERTYPE_UNSPECIFIED };
SLDataSource audio_source = { &loc_fd, &format_mime };
// Creating audio sink
SLDataLocator_AndroidSimpleBufferQueue loc_bq = { SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 1 };
SLDataFormat_PCM pcm = {
    .formatType = SL_DATAFORMAT_PCM,
    .numChannels = 2,
    .samplesPerSec = SL_SAMPLINGRATE_44_1,
    .bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_16,
    .containerSize = SL_PCMSAMPLEFORMAT_FIXED_16,
    .channelMask = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT,
    .endianness = SL_BYTEORDER_LITTLEENDIAN
};
SLDataSink sink = { &loc_bq, &pcm };
And then I register a callback, enqueue buffers, and move the PCM from the buffer into storage until decoding is done.
NOTE: the wav audio file is also 2-channel signed 16-bit 44.1kHz PCM
My oboe stream configuration is the same:
AudioStreamBuilder builder;
builder.setChannelCount(2);
builder.setSampleRate(44100);
builder.setCallback(this);
builder.setFormat(AudioFormat::I16);
builder.setPerformanceMode(PerformanceMode::LowLatency);
builder.setSharingMode(SharingMode::Exclusive);
Audio rendering is working like that:
// Oboe stream callback
DataCallbackResult audio_engine::onAudioReady(AudioStream* self, void* audio_data, int32_t num_frames) {
    auto stream = static_cast<int16_t*>(audio_data);
    sound->render(stream, num_frames);
    return DataCallbackResult::Continue;
}
// Sound::render method
void sound::render(int16_t* audio_data, int32_t num_frames) {
    auto iter = pcm_data.begin();
    std::advance(iter, cur_frame);
    const int32_t rem_size = std::min(num_frames, size - cur_frame);
    for (int32_t i = 0; i < rem_size; ++i, std::next(iter), ++cur_frame) {
        audio_data[i] += *iter;
    }
}
It looks like your render() method is confusing samples and frames.
A frame is a set of simultaneous samples.
In a stereo stream, each frame has TWO samples.
I think your iterator works on a sample basis. In other words, next(iter) will advance to the next sample, not the next frame. Try this (untested) code:
void sound::render(int16_t* audio_data, int32_t num_frames) {
    auto iter = pcm_data.begin();
    const int samples_per_frame = 2; // stereo
    std::advance(iter, cur_sample);
    const int32_t num_samples = std::min(num_frames * samples_per_frame,
                                         total_samples - cur_sample);
    for (int32_t i = 0; i < num_samples; ++i, ++iter, ++cur_sample) {
        audio_data[i] += *iter;
    }
}
In short: I was experiencing an underrun because I used std::forward_list to store the PCM. In such a case (retrieving PCM through iterators), one has to use a container whose iterator is a LegacyRandomAccessIterator (e.g. std::vector).
I was sure that the linear complexity of std::advance and std::next made no difference in my sound::render method. However, when I tried raw pointers and pointer arithmetic (thus, constant complexity) along with the debugging methods suggested in the comments (extracting PCM from WAV with Audacity, then loading that asset with AAssetManager directly into memory), I realized that the amount of corruption in the output sound was directly proportional to the position argument in std::advance(iter, position) in the render method.
So, since the amount of sound corruption was directly proportional to the complexity of std::advance (and std::next), I had to make the complexity constant by using std::vector as the container. Combining that with the answer from @philburk, I got this working result:
class sound {
private:
    const int samples_per_frame = 2; // stereo
    std::vector<int16_t> pcm_data;
    ...
public:
    void render(int16_t* audio_data, int32_t num_frames) {
        auto iter = std::next(pcm_data.begin(), cur_sample);
        const int32_t s = std::min(num_frames * samples_per_frame,
                                   total_samples - cur_sample);
        for (int32_t i = 0; i < s; ++i, std::advance(iter, 1), ++cur_sample) {
            audio_data[i] += *iter;
        }
    }
};
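Since the data now lives in a contiguous std::vector, an equivalent sketch using plain pointer arithmetic (assuming the same members pcm_data, cur_sample, samples_per_frame, and total_samples) avoids iterators entirely; the per-callback cost stays constant either way:

void render(int16_t* audio_data, int32_t num_frames) {
    const int16_t* src = pcm_data.data() + cur_sample; // O(1) access into contiguous storage
    const int32_t n = std::min(num_frames * samples_per_frame,
                               total_samples - cur_sample);
    for (int32_t i = 0; i < n; ++i) {
        audio_data[i] += src[i]; // mix into the output, as above
    }
    cur_sample += n;
}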

Android: Read audio data from file uri

I want to analyse an audio file (an mp3 in particular) which the user can select, and determine what notes are played, when they're played, and with what frequency.
I already have some working code for my computer, but I want to be able to use this on my phone as well.
In order to do this however, I need access to the bytes of the audio file. On my PC I could just open a stream and use AudioFormat to decode it and then read() the bytes frame by frame.
Looking at the Android Developer Forums I can only find classes and examples for playing a file (without access to the bytes) or recording to a file (I want to read from a file).
I'm pretty confident that I can set up a file chooser, but once I have the Uri from that, I don't know how to get a stream or the bytes.
Any help would be much appreciated :)
Edit: Is a similar solution to this possible? Android - Read a File
I don't know if I could decode the audio file that way or if there would be any problems with the Android API...
So I solved it in the following way:
Get an InputStream with
final InputStream inputStream = getContentResolver().openInputStream(selectedUri);
Then pass it to this function and decode it using classes from JLayer:
private synchronized void decode(InputStream in)
        throws BitstreamException, DecoderException {
    ArrayList<Short> output = new ArrayList<>(1024);
    Bitstream bitstream = new Bitstream(in);
    Decoder decoder = new Decoder();

    float total_ms = 0f;
    float nextNotify = -1f;
    boolean done = false;
    while (!done) {
        Header frameHeader = bitstream.readFrame();
        if (total_ms > nextNotify) {
            mListener.OnDecodeUpdate((int) total_ms);
            nextNotify += 500f;
        }
        if (frameHeader == null) {
            done = true;
        } else {
            total_ms += frameHeader.ms_per_frame();
            SampleBuffer buffer = (SampleBuffer) decoder.decodeFrame(frameHeader, bitstream); // CPU intense
            if (buffer.getSampleFrequency() != 44100 || buffer.getChannelCount() != 2) {
                throw new DecoderException("mono or non-44100 MP3 not supported", null);
            }
            short[] pcm = buffer.getBuffer();
            for (int i = 0; i < pcm.length - 1; i += 2) {
                short l = pcm[i];
                short r = pcm[i + 1];
                short mono = (short) ((l + r) / 2f);
                output.add(mono); // RAM intense
            }
        }
        bitstream.closeFrame();
    }
    bitstream.close();
    mListener.OnDecodeComplete(output);
}
The full project (in case you want to look up the particulars) can be found here:
https://github.com/S7uXN37/MusicInterpreterStudio/

Playing PCM WAVE sound from memory with OpenSL on Android

I'm trying to set up an OpenSL AudioPlayer to play back a wav file from memory I've allocated. I want to do this so I can have multiple AudioPlayers that share the same data and conserve memory.
I've tried giving OpenSL the entire file and telling it that it is a WAVE with format_mime:
SLDataLocator_Address loc_fd = {SL_DATALOCATOR_ADDRESS, data, size};
SLDataFormat_MIME format_mime = { SL_DATAFORMAT_MIME, (SLchar*)"audio/x-wav",SL_CONTAINERTYPE_WAV};
SLDataSource audioSrc = { &loc_fd, &format_mime };
// configure audio sink
SLDataLocator_OutputMix loc_outmix = { SL_DATALOCATOR_OUTPUTMIX,outputMixObject };
SLDataSink audioSnk = { &loc_outmix, 0 };
// create audio player
const SLInterfaceID ids[2] = { SL_IID_SEEK, SL_IID_PLAYBACKRATE };
const SLboolean req[2] = { SL_BOOLEAN_FALSE, SL_BOOLEAN_FALSE };
result = (*engineEngine)->CreateAudioPlayer(engineEngine,&uriPlayerObject[cntSOUND],&audioSrc, &audioSnk, 0, ids, req);
and I have parsed the WAVE data myself and loaded format_pcm:
SLDataFormat_PCM format_pcm;
format_pcm.formatType = SL_DATAFORMAT_PCM;
char* wavParser = isWAVE(data);
if (wavParser == NULL)
{
    Log("NOT A WAVE!");
    return -1;
}
char* fmtChunk = getChunk("fmt ", data, size);
parsefmtChunk(fmtChunk, &format_pcm);
char* dataChunk = getChunk("data",data, size);
dataChunk += 4;
unsigned int dataSize = *((unsigned int*)dataChunk);
dataChunk += 4;
format_pcm.channelMask = 0;
format_pcm.containerSize = 16;
format_pcm.endianness = SL_BYTEORDER_LITTLEENDIAN;
loc_fd.pAddress = dataChunk;
loc_fd.length = dataSize;
The parsefmtChunk function is:
void parsefmtChunk(char* fmtchunk, SLDataFormat_PCM* pcm)
{
    char* data = fmtchunk + 8;
    unsigned short audioFormat = *((unsigned short*)data);
    if (audioFormat != 1)
    {
        Log("Not PCM!");
        Log("Reached Line:%d in File %s", __LINE__, __FILE__);
        return;
    }
    data += 2;
    pcm->numChannels = *((unsigned short*)data);
    data += 2;
    pcm->samplesPerSec = *((unsigned int*)data);
    data += 4;
    // Byte Rate
    data += 4;
    // Block Align
    data += 2;
    // BitsPerSample
    pcm->bitsPerSample = *((unsigned short*)data);
}
(Are Byte Rate and Block Align supposed to be used somehow to fill out the pcm struct?)
but whenever I create the audio player I get SL_RESULT_CONTENT_UNSUPPORTED.
This is what I log from my parsefmtChunk function:
Channels:2
samplesPerSec:44100
bitsPerSample:16
from android-ndk-r8b/docs/opensles/index.html
PCM data format
The PCM data format can be used with buffer queues only.
So SLDataFormat_PCM CANNOT be used with SLDataLocator_Address like I assumed.
I can do what I want with a buffer queue instead, by using just one big queue like so:
(*bufferqueueitf)->Enqueue(bufferqueueitf, dataChunk, dataSize);
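For completeness, a hedged sketch of that buffer-queue route, reusing format_pcm, dataChunk, dataSize, engineEngine and outputMixObject from the question (playerObject, playItf and bufferqueueitf are new locals). One detail worth flagging: SLDataFormat_PCM.samplesPerSec is expressed in milliHertz (SL_SAMPLINGRATE_44_1 is 44100000), so a rate read straight from a WAV header must be scaled by 1000:

format_pcm.samplesPerSec *= 1000; // WAV stores Hz; OpenSL ES expects mHz

SLDataLocator_AndroidSimpleBufferQueue loc_bq =
    { SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 1 };
SLDataSource audioSrc = { &loc_bq, &format_pcm };
SLDataLocator_OutputMix loc_outmix = { SL_DATALOCATOR_OUTPUTMIX, outputMixObject };
SLDataSink audioSnk = { &loc_outmix, NULL };

SLObjectItf playerObject;
SLPlayItf playItf;
SLAndroidSimpleBufferQueueItf bufferqueueitf;
const SLInterfaceID ids[1] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE };
const SLboolean req[1] = { SL_BOOLEAN_TRUE };
result = (*engineEngine)->CreateAudioPlayer(engineEngine, &playerObject,
                                            &audioSrc, &audioSnk, 1, ids, req);
result = (*playerObject)->Realize(playerObject, SL_BOOLEAN_FALSE);
(*playerObject)->GetInterface(playerObject, SL_IID_PLAY, &playItf);
(*playerObject)->GetInterface(playerObject, SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &bufferqueueitf);
(*playItf)->SetPlayState(playItf, SL_PLAYSTATE_PLAYING);
(*bufferqueueitf)->Enqueue(bufferqueueitf, dataChunk, dataSize);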
Have you tried this?
SLDataFormat_MIME format_mime = {SL_DATAFORMAT_MIME, NULL, SL_CONTAINERTYPE_UNSPECIFIED};
The Android implementation of OpenSL ES isn't totally compliant and http://mobilepearls.com/labs/native-android-api/ndk/docs/opensles/ recommends the following:
The Android implementation of OpenSL ES requires that mimeType be initialized to either NULL or a valid UTF-8 string, and that containerType be initialized to a valid value. In the absence of other considerations, such as portability to other implementations, or content format which cannot be identified by header, we recommend that you set the mimeType to NULL and containerType to SL_CONTAINERTYPE_UNSPECIFIED.
Also, make sure you're giving it a valid URI.

Is it possible to get a byte buffer directly from an audio asset in OpenSL ES (for Android)?

I would like to get a byte buffer from an audio asset using the OpenSL ES FileDescriptor object, so I can enqueue it repeatedly to a SimpleBufferQueue, instead of using SL interfaces to play/stop/seek the file.
There are three main reasons why I would like to manage the sample bytes directly:
OpenSL uses an AudioTrack layer to play/stop/etc. for Player Objects. This not only introduces unwanted overhead, but it also has several bugs, and rapid starts/stops of the player cause lots of problems.
I need to manipulate the byte buffer directly for custom DSP effects.
The clips I'm going to be playing are small, and can all be loaded into memory to avoid file I/O overhead. Plus, enqueueing my own buffers will allow me to reduce latency by writing 0's to the output sink, and simply switching to sample bytes when they are playing, rather than STOPPING, PAUSING and PLAYING the AudioTrack.
Okay, so justifications complete. Here's what I've tried: I have a Sample struct which contains, essentially, an input and an output track, and a byte array to hold the samples. The input is my FileDescriptor player, and the output is a SimpleBufferQueue object. Here's my struct:
typedef struct Sample_ {
    // buffer to hold all samples
    short *buffer;
    int totalSamples;

    SLObjectItf fdPlayerObject;
    // file descriptor player interfaces
    SLPlayItf fdPlayerPlay;
    SLSeekItf fdPlayerSeek;
    SLMuteSoloItf fdPlayerMuteSolo;
    SLVolumeItf fdPlayerVolume;
    SLAndroidSimpleBufferQueueItf fdBufferQueue;

    SLObjectItf outputPlayerObject;
    SLPlayItf outputPlayerPlay;
    // output buffer interfaces
    SLAndroidSimpleBufferQueueItf outputBufferQueue;
} Sample;
after initializing a file player fdPlayerObject, and malloc-ing memory for my byte buffer with
sample->buffer = malloc(sizeof(short)*sample->totalSamples);
I'm getting its BufferQueue interface with
// get the buffer queue interface
result = (*(sample->fdPlayerObject))->GetInterface(sample->fdPlayerObject, SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &(sample->fdBufferQueue));
Then I instantiate an output player:
// create audio player for output buffer queue
const SLInterfaceID ids1[] = {SL_IID_ANDROIDSIMPLEBUFFERQUEUE};
const SLboolean req1[] = {SL_BOOLEAN_TRUE};
result = (*engineEngine)->CreateAudioPlayer(engineEngine, &(sample->outputPlayerObject), &outputAudioSrc, &audioSnk,
1, ids1, req1);
// realize the output player
result = (*(sample->outputPlayerObject))->Realize(sample->outputPlayerObject, SL_BOOLEAN_FALSE);
assert(result == SL_RESULT_SUCCESS);
// get the play interface
result = (*(sample->outputPlayerObject))->GetInterface(sample->outputPlayerObject, SL_IID_PLAY, &(sample->outputPlayerPlay));
assert(result == SL_RESULT_SUCCESS);
// get the buffer queue interface for output
result = (*(sample->outputPlayerObject))->GetInterface(sample->outputPlayerObject, SL_IID_ANDROIDSIMPLEBUFFERQUEUE,
&(sample->outputBufferQueue));
assert(result == SL_RESULT_SUCCESS);
// set the player's state to playing
result = (*(sample->outputPlayerPlay))->SetPlayState(sample->outputPlayerPlay, SL_PLAYSTATE_PLAYING);
assert(result == SL_RESULT_SUCCESS);
When I want to play the sample, I am using:
Sample *sample = &samples[sampleNum];
// THIS WORKS FOR SIMPLY PLAYING THE SAMPLE, BUT I WANT THE BUFFER DIRECTLY
// if (sample->fdPlayerPlay != NULL) {
// // set the player's state to playing
// (*(sample->fdPlayerPlay))->SetPlayState(sample->fdPlayerPlay, SL_PLAYSTATE_PLAYING);
// }
// fill buffer with the samples from the file descriptor
(*(sample->fdBufferQueue))->Enqueue(sample->fdBufferQueue, sample->buffer,sample->totalSamples*sizeof(short));
// write the buffer to the outputBufferQueue, which is already playing
(*(sample->outputBufferQueue))->Enqueue(sample->outputBufferQueue, sample->buffer, sample->totalSamples*sizeof(short));
However, this causes my app to freeze and shut down. Something is wrong here.
Also, I would prefer not to fetch the samples from the file descriptor's BufferQueue each time. Instead, I'd like to store them permanently in a byte array and Enqueue that to the output whenever I like.
Decoding to PCM is available at API level 14 and higher.
When you create the decoder player, you need to set an Android simple buffer queue as the data sink:
// For init use something like this:
SLDataLocator_AndroidFD locatorIn = { SL_DATALOCATOR_ANDROIDFD, descriptor, start, length };
SLDataFormat_MIME dataFormat = { SL_DATAFORMAT_MIME, NULL, SL_CONTAINERTYPE_UNSPECIFIED };
SLDataSource audioSrc = { &locatorIn, &dataFormat };

SLDataLocator_AndroidSimpleBufferQueue loc_bq = { SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2 };
SLDataSink audioSnk = { &loc_bq, NULL };

const SLInterfaceID ids[2] = { SL_IID_PLAY, SL_IID_ANDROIDSIMPLEBUFFERQUEUE };
const SLboolean req[2] = { SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE };
SLresult result = (*engineEngine)->CreateAudioPlayer(engineEngine, &(sample->fdPlayerObject),
                                                     &audioSrc, &audioSnk, 2, ids, req);
For the decoder queue, you need to enqueue a set of empty buffers to the Android simple buffer queue; these will be filled with PCM data.
You also need to register a callback handler with the decoder queue, which will be called when PCM data is ready. The callback handler should process the PCM data, re-enqueue the now-empty buffer, and then return. The application is responsible for keeping track of decoded buffers; the callback parameter list does not include sufficient information to indicate which buffer was filled or which buffer should be enqueued next.
Decode to PCM supports pause and initial seek. Volume control, effects, looping, and playback rate are not supported.
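A minimal sketch of that bookkeeping, assuming two buffers to match the locator above; OnPcmReady and the buffer sizes are illustrative:

#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>
#include <cstdint>

constexpr int kNumBuffers = 2;       // matches numBuffers in loc_bq above
constexpr int kBufferSamples = 1024;
static int16_t buffers[kNumBuffers][kBufferSamples];
static int nextBuffer = 0;           // the buffer the decoder fills next, tracked by the app

static void SLAPIENTRY OnPcmReady(SLAndroidSimpleBufferQueueItf bq, void* /*context*/)
{
    // Buffers are assumed to be filled in the order they were enqueued;
    // the callback itself does not say which one was completed.
    int16_t* filled = buffers[nextBuffer];
    nextBuffer = (nextBuffer + 1) % kNumBuffers;
    // ...consume the PCM in `filled` here (note: at end of stream the last
    // buffer may be only partially filled; detecting that is elided)...
    (*bq)->Enqueue(bq, filled, sizeof(buffers[0])); // recycle the empty buffer
}

// Setup, after realizing the decoder player and fetching its queue interface:
//   (*decoderQueue)->RegisterCallback(decoderQueue, OnPcmReady, NULL);
//   for (int i = 0; i < kNumBuffers; ++i)
//       (*decoderQueue)->Enqueue(decoderQueue, buffers[i], sizeof(buffers[0]));
//   (*playItf)->SetPlayState(playItf, SL_PLAYSTATE_PLAYING);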
Read Decode audio to PCM from OpenSL ES for Android for more details.
