Getting audio from a ByteArray in Android

I am trying to get audio from an NV21 byte array. When I run the code below, I get a **"java.nio.BufferOverflowException"** on the line inputBuffer.put(input). How can I get audio from the byte array?
I guess the error comes from the ByteBuffer being too small. I think I should increase the size of inputBuffer, but I can't find out how. Please help me.
public void init(){
//initialize Audio Encoder
File audio_file = new File(Environment.getExternalStorageDirectory().getAbsolutePath() + "/", "audio_encoded.aac");
try {
outputStream = new BufferedOutputStream(new FileOutputStream(audio_file));
Log.e("AudioEncoder", "outputStream initialized");
} catch (Exception e){
e.printStackTrace();
}
try {
audioCodec = MediaCodec.createEncoderByType(audioType);
} catch (IOException e) {
e.printStackTrace();
}
final int kSampleRates[] = { 8000, 11025, 22050, 44100, 48000 };
final int kBitRates[] = { 64000, 128000 };
MediaFormat audioFormat = MediaFormat.createAudioFormat(audioType,kSampleRates[3],2);
audioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, kBitRates[1]);
audioCodec.configure(audioFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
audioCodec.start();
}
}
// called AudioRecord's read
public synchronized void audioEncoder(byte[] input) {
Log.e("AudioEncoder", input.length + " is coming");
try {
ByteBuffer[] inputBuffers = audioCodec.getInputBuffers();
ByteBuffer[] outputBuffers = audioCodec.getOutputBuffers();
int inputBufferIndex = audioCodec.dequeueInputBuffer(-1);
if (inputBufferIndex >= 0) {
ByteBuffer inputBuffer = inputBuffers[inputBufferIndex];
inputBuffer.clear();
inputBuffer.put(input);
audioCodec.queueInputBuffer(inputBufferIndex, 0, input.length, 0, 0);
}
MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
int outputBufferIndex = audioCodec.dequeueOutputBuffer(bufferInfo,0);
//Without ADTS header
while (outputBufferIndex >= 0) {
ByteBuffer outputBuffer = outputBuffers[outputBufferIndex];
byte[] outData = new byte[bufferInfo.size];
outputBuffer.get(outData);
outputStream.write(outData, 0, outData.length);
Log.e("AudioEncoder", outData.length + " bytes written");
audioCodec.releaseOutputBuffer(outputBufferIndex, false);
outputBufferIndex = audioCodec.dequeueOutputBuffer(bufferInfo, 0);
}
} catch (Throwable t) {
t.printStackTrace();
}
}
private CameraProxy.CameraDataCallBack callBack = new CameraProxy.CameraDataCallBack() {
@Override
public void onDataBack(byte[] data, long length) {
// TODO Auto-generated method stub
Log.i(TAG, "length . " + length);
//audio play
int min_buffer_size = AudioRecord.getMinBufferSize(sampleRateInHz, channelConfig, audioFormats);
audioRecord = new AudioRecord(audioSource,sampleRateInHz,channelConfig,audioFormats,min_buffer_size);
audioRecord.read(data,0,data.length);
audioEncoder(data);
}
}

You can try:
audioFormat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, inputSize);
inputSize should be set according to the size of your input. Then the capacity of the input buffers will be enough.
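For illustration, a minimal sketch of where that call goes, assuming an inputSize of 16384 bytes (an arbitrary value; pick it from the chunk size you actually feed the codec):
// Sketch: ask for larger input buffers before configure()/start().
int inputSize = 16384; // assumption: size of the largest chunk you will queue
MediaFormat audioFormat = MediaFormat.createAudioFormat(audioType, 44100, 2);
audioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, 128000);
audioFormat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, inputSize);
audioCodec.configure(audioFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
audioCodec.start();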

Audio encoding offers a little more flexibility than video, in that it's easier to change the size of the input data.
In this case, I recommend checking the size of inputBuffer (inputBuffer.remaining()) and supplying exactly that amount of audio data. That means if data is too big, only put into the inputBuffer what will fit, and save the rest for the next input buffer. And if data is too small, buffer it someplace temporarily, until you get more audio data (enough to fill the entire inputBuffer). That's the way this codec is intended to be used.
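A minimal sketch of that chunking, using a hypothetical pcmQueue holding buffer (not in the original code) and keeping the zero presentation timestamp the original uses:
// Sketch: buffer incoming PCM and feed the codec exactly as much as the input buffer can hold.
private final java.io.ByteArrayOutputStream pcmQueue = new java.io.ByteArrayOutputStream();
public synchronized void audioEncoder(byte[] input) {
    pcmQueue.write(input, 0, input.length);               // stash whatever just arrived
    int index = audioCodec.dequeueInputBuffer(-1);
    if (index >= 0) {
        ByteBuffer inputBuffer = audioCodec.getInputBuffers()[index];
        inputBuffer.clear();
        byte[] all = pcmQueue.toByteArray();
        int chunk = Math.min(all.length, inputBuffer.remaining());
        inputBuffer.put(all, 0, chunk);                    // only as much as fits
        audioCodec.queueInputBuffer(index, 0, chunk, 0, 0);
        pcmQueue.reset();
        pcmQueue.write(all, chunk, all.length - chunk);    // keep the remainder for the next buffer
    }
}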
As an aside: Your code appears to have some problems with confusing video data and audio data, the lifecycle of the AudioRecord object, and the proper threading arrangement for capturing video and audio simultaneously. Here are some hints:
NV21 is a picture format, not audio.
onDataBack() is giving you a picture -- then you're overwriting it with a bit of audio
In onDataBack(), data is going to be HUGE -- b/c it contains a picture. And you're trying to read in the whole thing with audio data. Depending on how the AudioRecord is configured, it may only read a few bytes. You should check the return value. From the docs:
Data should be read from the audio hardware in chunks of sizes inferior to the total recording buffer size.
If you are in need of a better piece of sample code, this project looks pretty decent.
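To illustrate the third hint, here is a rough sketch of how AudioRecord is usually driven on its own thread, checking the return value of read(); the sample rate, channel config, and the recording flag are assumptions, not values from the question:
// Sketch: capture loop; read() may return fewer bytes than requested.
int minBuf = AudioRecord.getMinBufferSize(44100,
        AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
        44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, minBuf);
recorder.startRecording();
byte[] pcm = new byte[minBuf / 2];                        // read in chunks smaller than the buffer
while (recording) {                                       // 'recording' is a hypothetical flag
    int read = recorder.read(pcm, 0, pcm.length);
    if (read > 0) {
        audioEncoder(java.util.Arrays.copyOf(pcm, read)); // pass only the bytes actually read
    }
}
recorder.stop();
recorder.release();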

Related

How to play raw NAL units in Android MediaCodec

I initially tried How to play raw NAL units in Android ExoPlayer?, but I noticed I'm going to have to do things at a lower level.
I've found this simple MediaCodec example. As you can see, it's a thread that plays a file on a surface passed to it.
Notice the lines
mExtractor = new MediaExtractor();
mExtractor.setDataSource(filePath);
It looks like that I should create my own MediaExtractor which, instead of extracting the video units from a file, it'll use the h264 NAL units from a buffer I'll provide.
I can then call mExtractor.setDataSource(MediaDataSource dataSource), see MediaDataSource
It has readAt(long position, byte[] buffer, int offset, int size)
This is where it reads the NAL units. However, how should I pass them? I have no information on the structure of the buffer that needs to be read.
Should I pass a byte[] buffer with the NAL units in it, and if so, in which format? What is the offset for? If it's a buffer, shouldn't I just erase the lines that were read and thus have no offset or size?
By the way, the h264 NAL units are streamed; they come from RTP packets, not files. I'm going to get them through C++, store them in a buffer, and try to pass them to the MediaExtractor.
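For reference, a hedged sketch of what a buffer-backed MediaDataSource (API 23+) could look like; this is one possible design, not something prescribed by the thread. Note that the offset parameter is an offset into the destination buffer, and whether MediaExtractor will parse a bare h264 elementary stream served this way is a separate question:
// Hypothetical sketch: a MediaDataSource that serves bytes from an in-memory array.
class ByteArrayDataSource extends MediaDataSource {
    private final byte[] data;
    ByteArrayDataSource(byte[] data) { this.data = data; }
    @Override
    public int readAt(long position, byte[] buffer, int offset, int size) {
        if (position >= data.length) return -1;               // signal end of stream
        int toCopy = (int) Math.min(size, data.length - position);
        System.arraycopy(data, (int) position, buffer, offset, toCopy);
        return toCopy;
    }
    @Override
    public long getSize() { return data.length; }
    @Override
    public void close() { }
}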
UPDATE:
I've been reading a lot about MediaCodec and I think I understand it better. According to https://developer.android.com/reference/android/media/MediaCodec, everything relies on something of this type:
MediaCodec codec = MediaCodec.createByCodecName(name);
MediaFormat mOutputFormat; // member variable
codec.setCallback(new MediaCodec.Callback() {
@Override
void onInputBufferAvailable(MediaCodec mc, int inputBufferId) {
ByteBuffer inputBuffer = codec.getInputBuffer(inputBufferId);
// fill inputBuffer with valid data
…
codec.queueInputBuffer(inputBufferId, …);
}
@Override
void onOutputBufferAvailable(MediaCodec mc, int outputBufferId, …) {
ByteBuffer outputBuffer = codec.getOutputBuffer(outputBufferId);
MediaFormat bufferFormat = codec.getOutputFormat(outputBufferId); // option A
// bufferFormat is equivalent to mOutputFormat
// outputBuffer is ready to be processed or rendered.
…
codec.releaseOutputBuffer(outputBufferId, …);
}
@Override
void onOutputFormatChanged(MediaCodec mc, MediaFormat format) {
// Subsequent data will conform to new format.
// Can ignore if using getOutputFormat(outputBufferId)
mOutputFormat = format; // option B
}
@Override
void onError(…) {
…
}
});
codec.configure(format, …);
mOutputFormat = codec.getOutputFormat(); // option B
codec.start();
// wait for processing to complete
codec.stop();
codec.release();
As you can see, I can pass input buffers and get decoded output buffers. The exact byte formats are still a mystery, but I think that's how it works. Also according to the same article, the usage of ByteBuffers is slow, and Surfaces are preferred. They consume the output buffers automatically. Although there's no tutorial on how to do it, there's a section in the article that says it's almost identical, so I guess I just need to add the additional lines
codec.setInputSurface(Surface inputSurface)
codec.setOutputSurface(Surface outputSurface)
Where inputSurface and outputSurface are Surfaces which I pass to a MediaPlayer which I use (how?) to display the video in an activity. And the output buffers will simply not arrive in onOutputBufferAvailable (because the surface consumes them first), and neither will onInputBufferAvailable be called.
So the questions now are: how exactly do I construct a Surface which contains the video buffer, and how do I display it through a MediaPlayer in an activity?
For output I can simply create a Surface and pass it to a MediaPlayer and MediaCodec, but what about input? Do I need a ByteBuffer for the input anyway, and would a Surface just be for using other outputs as inputs?
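For context, a decoder's output Surface is normally handed to configure() directly rather than routed through a MediaPlayer; a minimal sketch, assuming a SurfaceView in the layout and an assumed resolution (setInputSurface()/setOutputSurface() are separate API 23+ calls: the former for persistent encoder input surfaces, the latter for swapping a decoder's output surface):
// Sketch: render decoded frames straight to a SurfaceView's Surface.
SurfaceView surfaceView = (SurfaceView) findViewById(R.id.surface_view); // hypothetical view id
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720); // assumed size
MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
decoder.configure(format, surfaceView.getHolder().getSurface(), null, 0);
decoder.start();
// ...queue NAL units into input buffers; releaseOutputBuffer(index, true)
// then renders each decoded frame to the Surface.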
You first need to strip the packetization and feed the raw H264 NAL unit bytes into this method. However, in your case you are reading from a file, so there is nothing to remove since you are not dealing with packets; just feed the data bytes to this method:
private void initDecoder(){
try {
writeHeader = true;
if(mDecodeMediaCodec != null){
try{
mDecodeMediaCodec.stop();
}catch (Exception e){}
try{
mDecodeMediaCodec.release();
}catch (Exception e){}
}
mDecodeMediaCodec = MediaCodec.createDecoderByType(MIME_TYPE);
//MIME_TYPE = video/avc in your case
mDecodeMediaCodec.configure(format,mSurfaceView.getHolder().getSurface(),
null,
0);
mDecodeMediaCodec.start();
mDecodeInputBuffers = mDecodeMediaCodec.getInputBuffers();
} catch (IOException e) {
e.printStackTrace();
mLatch.trigger();
}
}
private void decode(byte[] data){
try {
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int inputBufferIndex = mDecodeMediaCodec.dequeueInputBuffer(1000);//
if (inputBufferIndex >= 0) {
ByteBuffer buffer = mDecodeInputBuffers[inputBufferIndex];
buffer.clear();
buffer.put(data);
mDecodeMediaCodec.queueInputBuffer(inputBufferIndex,
0,
data.length,
packet.sequence / 1000,
0);
data = null;
//decodeDataBuffer.clear();
//decodeDataBuffer = null;
}
int outputBufferIndex = mDecodeMediaCodec.dequeueOutputBuffer(info,
1000);
do {
if (outputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
//no output available yet
} else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
//encodeOutputBuffers = mDecodeMediaCodec.getOutputBuffers();
} else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
format = mDecodeMediaCodec.getOutputFormat();
//mediaformat changed
} else if (outputBufferIndex < 0) {
//unexpected result from encoder.dequeueOutputBuffer
} else {
mDecodeMediaCodec.releaseOutputBuffer(outputBufferIndex,
true);
outputBufferIndex = mDecodeMediaCodec.dequeueOutputBuffer(info,
0);
}
} while (outputBufferIndex > 0);
}
Please don't forget that the I-frame (the first frame's bytes) carries data the decoder depends on and MUST be fed to the decoder first.
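A common way to satisfy that requirement is to hand the decoder its codec-specific data (SPS/PPS) up front through the MediaFormat instead of relying on the first queued buffer alone; a hedged sketch, where sps and pps are assumed byte arrays holding Annex-B NAL units you extracted yourself and the frame size is an assumption:
// Sketch: supply SPS/PPS as codec-specific data before start().
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 640, 480); // assumed size
format.setByteBuffer("csd-0", ByteBuffer.wrap(sps));
format.setByteBuffer("csd-1", ByteBuffer.wrap(pps));
mDecodeMediaCodec = MediaCodec.createDecoderByType("video/avc");
mDecodeMediaCodec.configure(format, mSurfaceView.getHolder().getSurface(), null, 0);
mDecodeMediaCodec.start();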

MediaExtractor.advance() - End of stream signalled earlier than expected

I have a problem with end of stream being signalled earlier than expected (way before the file ends) by the advance() method of the MediaExtractor class. According to the Google reference, the advance() method can behave unexpectedly when using a local file (and this is my case - filePath points to a local file):
When extracting a local file, the behaviors of advance() and
readSampleData(ByteBuffer, int) are undefined in presence of
concurrent writes to the same local file:
Unfortunately there is not a single word about using MediaExtractor without the advance() method. How else can I move to the next sample? If there is no way to do it, then I'd like to know how to feed the input buffer without using MediaExtractor.
A fragment of my code below:
if(Build.VERSION.SDK_INT >= 21 ) {
try {
codec = MediaCodec.createByCodecName("OMX.google.mp3.decoder");
} catch (IOException e) {
e.printStackTrace();
}
final MediaExtractor extractor = new MediaExtractor();
try {
extractor.setDataSource(filePath); //local file
Log.i("filePath", String.valueOf(filePath));
extractor.selectTrack(0);
} catch (IOException e) {
e.printStackTrace();
}
codec.setCallback(new MediaCodec.Callback() {
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
@Override
public void onInputBufferAvailable(MediaCodec mc, int inputBufferId) {
ByteBuffer inputBuffer = codec.getInputBuffer(inputBufferId);
// fill inputBuffer with valid data
int sampleSize = extractor.readSampleData(inputBuffer,0);
if(extractor.advance()) {
codec.queueInputBuffer(inputBufferId, 0, sampleSize, 0, 0);
} else {
// EOS
codec.queueInputBuffer(inputBufferId, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
codec.stop();
codec.release();
}
}
// more callbacks
});
}
In my samples MediaExtractor worked correctly when I structured the statements differently. I did not check for EOS with extractor.advance() like you do. Try what other samples suggest:
int chunkSize = extractor.readSampleData(inputBuffer, 0);
if (chunkSize < 0) {
// End of stream -- send empty frame with EOS flag set.
codec.queueInputBuffer(inputBufferId, 0, 0, 0L,
MediaCodec.BUFFER_FLAG_END_OF_STREAM);
} else {
// No EOS -- continue
long presentationTimeUs = extractor.getSampleTime();
codec.queueInputBuffer(inputBufferId, 0, chunkSize,
presentationTimeUs, 0 /*flags*/);
extractor.advance();
}
BTW, the MediaExtractor guide describes this usage.
Fadden has MoviePlayer in Grafika, which uses MediaExtractor, but it feeds the decoder without MediaCodec.Callback.

Android MediaCodec flush after MediaExtractor seekTo and AudioTrack flush behaviour unexpectedly bad

I am using the combination of AudioTrack, MediaCodec and MediaExtractor to decode and play music.
As per the document,
In order to start decoding data that's not adjacent to previously submitted data (i.e. after a seek) it is necessary to flush() the decoder. Any input or output buffers the client may own at the point of the flush are immediately revoked, i.e. after a call to flush() the client does not own any buffers anymore.
I am calling flush() on the decoder after a seek, and to go with it I should also call mAudioTrack.flush().
Though flush is called, the AudioTrack still plays some part of the previously written data before continuing with the newly written data.
Without calling decoder.flush() I hear noticeable glitches in audio playback. So how do I achieve an instantaneous flush and continue playback with the newly written data?
Code Snippet:
Updated code
do {
int codedbufferIndex = decoder.dequeueInputBuffer(1000);
if (codedbufferIndex >= 0) {
ByteBuffer codecInput = inputBuffers[codedbufferIndex];
synchronized (playerState) {
if (seek) {
extractor.seekTo(seekTo,
MediaExtractor.SEEK_TO_CLOSEST_SYNC);
// audioTrack.pause();
// audioTrack.stop();
// audioTrack.flush();
decoder.flush();
seek = false;
// audioTrack.play();
continue;
}
}
read = extractor.readSampleData(codecInput, offset);
if (read < 0) {
if (extractor.hasCacheReachedEndOfStream())
Log.e(TAG, "extractor.hasCacheReachedEndOfStream()");
break;
}
presentationTimeUs = extractor.getSampleTime();
decoder.queueInputBuffer(codedbufferIndex, offset, read,
presentationTimeUs, (read > 0) ? 0
: MediaCodec.BUFFER_FLAG_END_OF_STREAM);
int decodedDataBufIndex = decoder.dequeueOutputBuffer(info,
2000);
if (decodedDataBufIndex >= 0) {
ByteBuffer codecOutput = outputBuffers[decodedDataBufIndex];
byte[] atInput = new byte[info.size];
codecOutput.get(atInput);
codecOutput.clear();
decoder.releaseOutputBuffer(decodedDataBufIndex, false);
if(info.offset != 0){
Log.e(TAG,"info.offset = "+String.valueOf(info.offset));
}
audioTrack.write(atInput, /*0*/info.offset, info.size);
extractor.advance();
} else if (decodedDataBufIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
MediaFormat newFormat = decoder.getOutputFormat();
Log.e(TAG,
"newFormat: "
+ newFormat
.getString(MediaFormat.KEY_MIME));
Log.e(TAG,
"newFormat: "
+ String.valueOf(newFormat
.getInteger(MediaFormat.KEY_SAMPLE_RATE)));
Log.e(TAG, "Inside INFO_OUTPUT_FORMAT_CHANGED");
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
sampleRate, AudioFormat.CHANNEL_OUT_STEREO,
AudioFormat.ENCODING_PCM_16BIT, 32768,
AudioTrack.MODE_STREAM);
audioTrack.play();
}
} else {
Log.e(TAG,"codedbufferIndex is "+String.valueOf(codedbufferIndex));
}
} while (read >= 0);
AudioTrack.flush() is a no-op if the track is not paused or stopped:
No-op if not stopped or paused, or if the track's creation mode is not
MODE_STREAM.
One solution would be to lower the buffer of the AudioTrack, so the playing of the previously submitted bytes is not perceptible. In my app, I use a rather large size of 32K and it isn't bothersome.
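Under the assumption that the ineffective flush is the culprit, the seek branch in the loop above could pause the track before flushing both sides; a sketch of that ordering, not a verified fix:
// Sketch: make AudioTrack.flush() effective by pausing first.
if (seek) {
    extractor.seekTo(seekTo, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
    audioTrack.pause();     // flush() is a no-op while the track is playing
    audioTrack.flush();     // drop queued-but-unplayed PCM
    decoder.flush();        // drop the codec's in-flight buffers
    audioTrack.play();
    seek = false;
    continue;
}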

How to encode to the AAC by MediaCodec in Android?

I want to convert a file from .mp3 to AAC, and I am referencing the source code of the Spydroid app.
It converts the audio data from the microphone audio source to AAC.
I tried to modify the code and change the audio source from the microphone to a local mp3 file,
but I don't know how to push the raw audio to the decoder...
Can somebody help me and give me some suggestions?
The encoding code is like the following:
protected void encodeWithMediaCodec() throws IOException {
final int bufferSize = AudioRecord.getMinBufferSize(mQuality.samplingRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT)*2;
((AACLATMPacketizer)mPacketizer).setSamplingRate(mQuality.samplingRate);
//The music file in my phone , and I want to push it to the
long file_size = 0;
String path = "/storage/sdcard1/music/test.mp3";
File audio = new File(path );
mMediaCodec = MediaCodec.createEncoderByType("audio/mp4a-latm");
MediaFormat format = new MediaFormat();
format.setString(MediaFormat.KEY_MIME, "audio/mp4a-latm");
format.setInteger(MediaFormat.KEY_BIT_RATE, mQuality.bitRate);
format.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1);
format.setInteger(MediaFormat.KEY_SAMPLE_RATE, mQuality.samplingRate);
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, mBuffer_Size);
mMediaCodec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mMediaCodec.start();
final MediaCodecInputStream inputStream = new MediaCodecInputStream(mMediaCodec);
final ByteBuffer[] inputBuffers = mMediaCodec.getInputBuffers();
mThread = new Thread(new Runnable() {
@Override
public void run() {
int len = 0, bufferIndex = 0;
try {
while (!Thread.interrupted()) {
bufferIndex = mMediaCodec.dequeueInputBuffer(10000);
if (bufferIndex>=0) {
inputBuffers[bufferIndex].clear();
len = audio.length();
if (len == AudioRecord.ERROR_INVALID_OPERATION || len == AudioRecord.ERROR_BAD_VALUE) {
Log.e(TAG,"An error occured with the AudioRecord API !");
} else {
Log.v(TAG,"Pushing raw audio to the decoder: len="+len+" bs: "+inputBuffers[bufferIndex].capacity());
mMediaCodec.queueInputBuffer(bufferIndex, 0, len, System.nanoTime()/1000, 0);
}
}
}
} catch (RuntimeException e) {
e.printStackTrace();
}
}
});
mThread.start();
// The packetizer encapsulates this stream in an RTP stream and send it over the network
mPacketizer.setDestination(mDestination, mRtpPort, mRtcpPort);
mPacketizer.setInputStream(inputStream);
mPacketizer.start();
mStreaming = true;
}
The error log is like the following:
V/AACStream(25015): Pushing raw audio to the decoder: len=11952883 bs: 4096
W/System.err(25015): java.lang.IllegalStateException
W/System.err(25015): at android.media.MediaCodec.queueInputBuffer(Native Method)
W/System.err(25015): at net.majorkernelpanic.streaming.audio.AACStream$1.run(AACStream.java:258)
W/System.err(25015): at java.lang.Thread.run(Thread.java:856)
W/InputMethodManagerService( 574): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy#42afeb98 attribute=null
And the error in the code is at
mMediaCodec.queueInputBuffer(bufferIndex, 0, len, System.nanoTime()/1000, 0);
I think the error is related to len, but I don't know how to fix it...
And it seems the music file is not being put into the encoder... I have two questions:
Question 1:
How do I push the audio file to the MediaCodec?
Am I missing something in the code?
Question 2:
How can I make sure the data has been encoded to AAC?
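One thing visible in the log: len is the whole file length (11952883) while the input buffer only holds 4096 bytes, and an AAC encoder expects raw PCM samples rather than .mp3 file bytes. As a hedged sketch only (pcmChunk is a hypothetical byte[] of decoded PCM, e.g. produced by a MediaExtractor plus an MP3 decoder, none of which is shown in the question), the queueing step would feed at most one buffer's worth per call:
// Sketch: queue at most one input buffer's worth of PCM per dequeue.
int bufferIndex = mMediaCodec.dequeueInputBuffer(10000);
if (bufferIndex >= 0) {
    ByteBuffer in = inputBuffers[bufferIndex];
    in.clear();
    int len = Math.min(pcmChunk.length, in.remaining());
    in.put(pcmChunk, 0, len);
    mMediaCodec.queueInputBuffer(bufferIndex, 0, len, System.nanoTime() / 1000, 0);
}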

Android, use Mediacodec with libstreaming

I have a problem with this library:
https://github.com/fyhertz/libstreaming
It allows streaming the camera wirelessly, and it uses 3 methods: two with MediaCodec and one with MediaRecorder.
I would like to modify it, and I have to use only MediaCodec; however, first of all I tried the code of example 2 of the library, but I always get the same error:
the log tells me that the device can use MediaCodec, it sets up the encoder, but when it tests the decoder it fails and the buffer is filled with -1.
This is the method in the EncoderDebugger class where the exception occurs. Can some kind soul help me, please?
private long decode(boolean withPrefix) {
int n =3, i = 0, j = 0;
long elapsed = 0, now = timestamp();
int decInputIndex = 0, decOutputIndex = 0;
ByteBuffer[] decInputBuffers = mDecoder.getInputBuffers();
ByteBuffer[] decOutputBuffers = mDecoder.getOutputBuffers();
BufferInfo info = new BufferInfo();
while (elapsed<3000000) {
// Feeds the decoder with a NAL unit
if (i<NB_ENCODED) {
decInputIndex = mDecoder.dequeueInputBuffer(1000000/FRAMERATE);
if (decInputIndex>=0) {
int l1 = decInputBuffers[decInputIndex].capacity();
int l2 = mVideo[i].length;
decInputBuffers[decInputIndex].clear();
if ((withPrefix && hasPrefix(mVideo[i])) || (!withPrefix && !hasPrefix(mVideo[i]))) {
check(l1>=l2, "The decoder input buffer is not big enough (nal="+l2+", capacity="+l1+").");
decInputBuffers[decInputIndex].put(mVideo[i],0,mVideo[i].length);
} else if (withPrefix && !hasPrefix(mVideo[i])) {
check(l1>=l2+4, "The decoder input buffer is not big enough (nal="+(l2+4)+", capacity="+l1+").");
decInputBuffers[decInputIndex].put(new byte[] {0,0,0,1});
decInputBuffers[decInputIndex].put(mVideo[i],0,mVideo[i].length);
} else if (!withPrefix && hasPrefix(mVideo[i])) {
check(l1>=l2-4, "The decoder input buffer is not big enough (nal="+(l2-4)+", capacity="+l1+").");
decInputBuffers[decInputIndex].put(mVideo[i],4,mVideo[i].length-4);
}
mDecoder.queueInputBuffer(decInputIndex, 0, l2, timestamp(), 0);
i++;
} else {
if (VERBOSE) Log.d(TAG,"No buffer available !7");
}
}
// Tries to get a decoded image
decOutputIndex = mDecoder.dequeueOutputBuffer(info, 1000000/FRAMERATE);
if (decOutputIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
decOutputBuffers = mDecoder.getOutputBuffers();
} else if (decOutputIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
mDecOutputFormat = mDecoder.getOutputFormat();
} else if (decOutputIndex>=0) {
if (n>2) {
// We have successfully encoded and decoded an image !
int length = info.size;
mDecodedVideo[j] = new byte[length];
decOutputBuffers[decOutputIndex].clear();
decOutputBuffers[decOutputIndex].get(mDecodedVideo[j], 0, length);
// Converts the decoded frame to NV21
convertToNV21(j);
if (j>=NB_DECODED-1) {
flushMediaCodec(mDecoder);
if (VERBOSE) Log.v(TAG, "Decoding "+n+" frames took "+elapsed/1000+" ms");
return elapsed;
}
j++;
}
mDecoder.releaseOutputBuffer(decOutputIndex, false);
n++;
}
elapsed = timestamp() - now;
}
throw new RuntimeException("The decoder did not decode anything.");
}
Here are my suggestions:
(1) Check the settings of the encoder and decoder, and make sure that they match. For example, the resolution and color format should be the same.
(2) make sure the very first packet generated by the encoder has been sent and pushed into the decoder. This packet defines the basic settings of the video stream.
(3) the decoder usually buffers 5-10 frames. So data in the buffer is invalid for a few hundred ms.
(4) While initializing the decoder, set the surface to null. Otherwise the output buffers will be consumed by the surface and probably released automatically.
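To make suggestions (1) and (4) concrete, here is a rough sketch of a test decoder configured to mirror the encoder's settings and with a null surface so the output buffers stay accessible; width, height, sps and pps are assumptions standing in for values from your encoder setup:
// Sketch: decoder mirroring the encoder, no output surface so buffers can be inspected.
MediaFormat decFormat = MediaFormat.createVideoFormat("video/avc", width, height);
decFormat.setByteBuffer("csd-0", ByteBuffer.wrap(sps));   // from the encoder's first packet
decFormat.setByteBuffer("csd-1", ByteBuffer.wrap(pps));
MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
decoder.configure(decFormat, null /* keep ByteBuffer output */, null, 0);
decoder.start();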
