How to extract PCM samples from a MediaCodec decoder's output - Android

I'm trying to obtain the PCM samples, for further processing, from a decoded MP4 buffer. I'm first extracting the audio track from a video file recorded with the phone's camera app, and I've made sure the audio track is selected by checking that its MIME type starts with "audio/":
MediaExtractor extractor = new MediaExtractor();
try {
    extractor.setDataSource(fileUri.getPath());
} catch (IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}

int numTracks = extractor.getTrackCount();
for (int i = 0; i < numTracks; ++i) {
    MediaFormat format = extractor.getTrackFormat(i);
    String mime = format.getString(MediaFormat.KEY_MIME);
    //Log.d("mime =", mime);
    if (mime.startsWith("audio/")) {
        extractor.selectTrack(i);
        decoder = MediaCodec.createDecoderByType(mime);
        decoder.configure(format, null, null, 0);
        //getSampleCryptoInfo(MediaCodec.CryptoInfo info)
        break;
    }
}

if (decoder == null) {
    Log.e("DecodeActivity", "Can't find audio info!");
    return;
}

decoder.start();
After that, I iterate through the track, feeding the codec the stream of encoded access units, and pulling the decoded access units into a ByteBuffer (this is code I recycled from a video rendering example posted here https://github.com/vecio/MediaCodecDemo):
ByteBuffer[] inputBuffers = decoder.getInputBuffers();
ByteBuffer[] outputBuffers = decoder.getOutputBuffers();
BufferInfo info = new BufferInfo();
boolean isEOS = false;

while (true) {
    if (!isEOS) {
        int inIndex = decoder.dequeueInputBuffer(10000);
        if (inIndex >= 0) {
            ByteBuffer buffer = inputBuffers[inIndex];
            int sampleSize = extractor.readSampleData(buffer, 0);
            if (sampleSize < 0) {
                // We shouldn't stop the playback at this point, just pass the EOS
                // flag to the decoder; we will get it again from dequeueOutputBuffer
                Log.d("DecodeActivity", "InputBuffer BUFFER_FLAG_END_OF_STREAM");
                decoder.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                isEOS = true;
            } else {
                decoder.queueInputBuffer(inIndex, 0, sampleSize, extractor.getSampleTime(), 0);
                extractor.advance();
            }
        }
    }

    int outIndex = decoder.dequeueOutputBuffer(info, 10000);
    switch (outIndex) {
        case MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED:
            Log.d("DecodeActivity", "INFO_OUTPUT_BUFFERS_CHANGED");
            outputBuffers = decoder.getOutputBuffers();
            break;
        case MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:
            Log.d("DecodeActivity", "New format " + decoder.getOutputFormat());
            break;
        case MediaCodec.INFO_TRY_AGAIN_LATER:
            Log.d("DecodeActivity", "dequeueOutputBuffer timed out!");
            break;
        default:
            ByteBuffer buffer = outputBuffers[outIndex];
            // How to obtain PCM samples from this buffer variable??
            decoder.releaseOutputBuffer(outIndex, true);
            break;
    }

    // All decoded frames have been rendered, we can stop playing now
    if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
        Log.d("DecodeActivity", "OutputBuffer BUFFER_FLAG_END_OF_STREAM");
        break;
    }
}
The code seems to work with no errors so far, but I'm currently stuck trying to figure out how to obtain the PCM samples from the ByteBuffer that takes the value of the output buffer. I guess I could assume that, since I'm working with a 16-bit stereo audio file, the samples are interleaved and two bytes each... however, I'm not really sure about this, and I'd like to unequivocally retrieve the PCM samples from this byte stream. Does anybody know how to get these from the MediaCodec API?
I've read about a couple of alternatives using ffmpeg or OpenSL, but since I am new to Android programming I was hoping to avoid the complications of C-based APIs and build my first app using only the tools provided by the Android framework (I'm using KitKat). Any help will be greatly appreciated.
UPDATE: I was able to extract the PCM samples, both the way I assumed and the way @marcone pointed out. To do so, I added these lines below the buffer assignment:
byte[] b = new byte[info.size - info.offset];
int a = buffer.position();
buffer.get(b);
buffer.position(a);
and finally write the byte array to a file by:
f.write(b, 0, info.size - info.offset);
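For reference, the interleaving guess from the question holds: the decoder outputs 16-bit PCM in native byte order (little-endian on current Android devices), with the channels interleaved. A minimal sketch of turning the copied bytes into samples (the helper name is mine, and it assumes the output format really is 16-bit stereo):

static short[] toPcm16(byte[] b) {
    // Two bytes per sample, LSB first; stereo samples are interleaved:
    // samples[0] = left, samples[1] = right, samples[2] = left, ...
    short[] samples = new short[b.length / 2];
    java.nio.ByteBuffer.wrap(b)
            .order(java.nio.ByteOrder.LITTLE_ENDIAN)
            .asShortBuffer()
            .get(samples);
    return samples;
}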
The problem I'm dealing with now is:
The decoded audio samples do not exactly match the decoding of the MP4 audio track done by iZotope. There is a 48-sample mismatch in the WAV file sizes, and a 2112-sample delay in the decoded signals. My question now is: would all MP4 decoders yield the same output PCM stream, or does it depend on the decoder implementation?

I found the delays to be caused by the AAC encoding priming and remainder times, as explained here:
https://developer.apple.com/library/mac/documentation/quicktime/qtff/QTFFAppenG/QTFFAppenG.html
In my case, the priming time is always 2112 samples, and the remainder is naturally variable depending on the audio size.
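If sample-accurate alignment matters, one option is simply to drop those priming samples from the front of the decoded stream. A rough sketch (the helper is mine; the 2112-frame priming value is what I measured for my files, and ideally it would come from the encoder-delay metadata rather than being hard-coded):

static byte[] trimPriming(byte[] pcm, int primingFrames, int channels) {
    // 2 bytes per 16-bit sample, primingFrames frames of `channels` samples each
    int skipBytes = primingFrames * channels * 2;
    if (skipBytes >= pcm.length) return new byte[0];
    byte[] trimmed = new byte[pcm.length - skipBytes];
    System.arraycopy(pcm, skipBytes, trimmed, 0, trimmed.length);
    return trimmed;
}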

I know the problem is already solved here, but the current code uses MediaCodec synchronously with the now-deprecated buffer arrays. I learned from this question and did the same thing with asynchronous use of MediaCodec. Just posting the GitHub link so that it might help someone later on.
Github Asynchronous implementation: link
FYI: The audio player used is just copy-pasted from some other thread for the time being. It is deprecated; I will update it when I get time. Also, the code is in Kotlin (it is still easy to understand).
Please check the asynchronous-processing section of the official MediaCodec documentation.

Related

How to play raw NAL units in Android MediaCodec

I initially tried How to play raw NAL units in Android exoplayer? but I noticed I'm going to have to do things at a low level.
I've found this simple MediaCodec example. As you can see, it's a thread that plays a file on a surface passed to it.
Notice the lines
mExtractor = new MediaExtractor();
mExtractor.setDataSource(filePath);
It looks like I should create my own MediaExtractor which, instead of extracting the video units from a file, will use the H.264 NAL units from a buffer I'll provide.
I can then call mExtractor.setDataSource(MediaDataSource dataSource), see MediaDataSource
It has readAt(long position, byte[] buffer, int offset, int size)
This is where it reads the NAL units. However, how should I pass them? I have no information on the structure of the buffer that needs to be read.
Should I pass a byte[] buffer with the NAL units in it, and if so, in which format? What is the offset for? If it's a buffer, shouldn't I just erase the lines that were read and thus have no offset or size?
By the way, the H.264 NAL units are streamed; they come from RTP packets, not files. I'm going to receive them through C++, store them in a buffer, and try to pass them to the MediaExtractor.
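To illustrate the readAt() contract, here is a rough sketch of a MediaDataSource backed by an in-memory byte array (API 23+; the class and field names are mine). Note, though, that MediaExtractor still needs a container it can parse (MP4, MPEG-TS, ...), so handing it a bare H.264 elementary stream this way will generally not work; for raw NAL units you would feed MediaCodec directly instead.

class ByteArrayDataSource extends android.media.MediaDataSource {
    private final byte[] data;

    ByteArrayDataSource(byte[] data) { this.data = data; }

    @Override
    public int readAt(long position, byte[] buffer, int offset, int size) {
        // Copy up to `size` bytes of the media, starting at `position`,
        // into `buffer` starting at `offset`; return how many were copied.
        if (position >= data.length) return -1; // end of stream
        int count = (int) Math.min(size, data.length - position);
        System.arraycopy(data, (int) position, buffer, offset, count);
        return count;
    }

    @Override
    public long getSize() { return data.length; }

    @Override
    public void close() { }
}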
UPDATE:
I've been reading a lot about MediaCodec and I think I understand it better. According to https://developer.android.com/reference/android/media/MediaCodec, everything relies on something of this type:
MediaCodec codec = MediaCodec.createByCodecName(name);
MediaFormat mOutputFormat; // member variable
codec.setCallback(new MediaCodec.Callback() {
    @Override
    void onInputBufferAvailable(MediaCodec mc, int inputBufferId) {
        ByteBuffer inputBuffer = codec.getInputBuffer(inputBufferId);
        // fill inputBuffer with valid data
        …
        codec.queueInputBuffer(inputBufferId, …);
    }

    @Override
    void onOutputBufferAvailable(MediaCodec mc, int outputBufferId, …) {
        ByteBuffer outputBuffer = codec.getOutputBuffer(outputBufferId);
        MediaFormat bufferFormat = codec.getOutputFormat(outputBufferId); // option A
        // bufferFormat is equivalent to mOutputFormat
        // outputBuffer is ready to be processed or rendered.
        …
        codec.releaseOutputBuffer(outputBufferId, …);
    }

    @Override
    void onOutputFormatChanged(MediaCodec mc, MediaFormat format) {
        // Subsequent data will conform to new format.
        // Can ignore if using getOutputFormat(outputBufferId)
        mOutputFormat = format; // option B
    }

    @Override
    void onError(…) {
        …
    }
});
codec.configure(format, …);
mOutputFormat = codec.getOutputFormat(); // option B
codec.start();
// wait for processing to complete
codec.stop();
codec.release();
As you can see, I can pass input buffers and get decoded output buffers. The exact byte formats are still a mystery, but I think that's how it works. Also according to the same article, the usage of ByteBuffers is slow, and Surfaces are preferred. They consume the output buffers automatically. Although there's no tutorial on how to do it, there's a section in the article that says it's almost identical, so I guess I just need to add the additional lines
codec.setInputSurface(Surface inputSurface)
codec.setOutputSurface(Surface outputSurface)
Where inputSurface and outputSurface are Surfaces that I pass to a MediaPlayer, which I use (how?) to display the video in an activity. And the output buffers will simply not arrive in onOutputBufferAvailable (because the surface consumes them first), nor in onInputBufferAvailable.
So the questions now are: how exactly do I construct a Surface that contains the video buffer, and how do I display a MediaPlayer in an activity?
For output I can simply create a Surface and pass it to a MediaPlayer and MediaCodec, but what about input? Do I need a ByteBuffer for the input anyway, and would a Surface just be for using other outputs as inputs?
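For what it's worth, the usual decoder pattern is to hand the output Surface to configure() (for example from a SurfaceView), while the input still goes in through ByteBuffers. A minimal sketch, where surfaceView, width and height are assumptions:

MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
MediaCodec decoder = MediaCodec.createDecoderByType("video/avc"); // throws IOException
// Render straight to the SurfaceView; onOutputBufferAvailable then only needs
// to call releaseOutputBuffer(id, true) to display the frame.
decoder.configure(format, surfaceView.getHolder().getSurface(), null, 0);
decoder.start();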
You first need to strip the packetization to recover the raw H.264 NAL bytes and feed those into this method. However, in your case you are reading from a file, so there is no need to remove anything since you are not using packets; just feed the data bytes to this method:
private void initDecoder() {
    try {
        writeHeader = true;
        if (mDecodeMediaCodec != null) {
            try {
                mDecodeMediaCodec.stop();
            } catch (Exception e) {}
            try {
                mDecodeMediaCodec.release();
            } catch (Exception e) {}
        }
        mDecodeMediaCodec = MediaCodec.createDecoderByType(MIME_TYPE);
        // MIME_TYPE = video/avc in your case
        mDecodeMediaCodec.configure(format,
                mSurfaceView.getHolder().getSurface(),
                null,
                0);
        mDecodeMediaCodec.start();
        mDecodeInputBuffers = mDecodeMediaCodec.getInputBuffers();
    } catch (IOException e) {
        e.printStackTrace();
        mLatch.trigger();
    }
}
private void decode(byte[] data) {
    try {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int inputBufferIndex = mDecodeMediaCodec.dequeueInputBuffer(1000);
        if (inputBufferIndex >= 0) {
            ByteBuffer buffer = mDecodeInputBuffers[inputBufferIndex];
            buffer.clear();
            buffer.put(data);
            mDecodeMediaCodec.queueInputBuffer(inputBufferIndex,
                    0,
                    data.length,
                    packet.sequence / 1000,
                    0);
            data = null;
            //decodeDataBuffer.clear();
            //decodeDataBuffer = null;
        }
        int outputBufferIndex = mDecodeMediaCodec.dequeueOutputBuffer(info, 1000);
        do {
            if (outputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
                // no output available yet
            } else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
                //encodeOutputBuffers = mDecodeMediaCodec.getOutputBuffers();
            } else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                format = mDecodeMediaCodec.getOutputFormat();
                // MediaFormat changed
            } else if (outputBufferIndex < 0) {
                // unexpected result from decoder.dequeueOutputBuffer
            } else {
                mDecodeMediaCodec.releaseOutputBuffer(outputBufferIndex, true);
                outputBufferIndex = mDecodeMediaCodec.dequeueOutputBuffer(info, 0);
            }
        } while (outputBufferIndex > 0);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Please don't forget that the I-frame (the first frame bytes) contains sensitive data and MUST be fed to the decoder first.
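One way to satisfy that requirement, if you have the SPS and PPS NAL units separately, is to queue them ahead of the first frame with BUFFER_FLAG_CODEC_CONFIG. A sketch, assuming sps and pps are byte arrays that already begin with the 0x00000001 start code and that the decoder has been started (getInputBuffer() needs API 21+; on older releases use the getInputBuffers() array instead):

void queueCodecConfig(MediaCodec decoder, byte[] sps, byte[] pps) {
    for (byte[] csd : new byte[][] { sps, pps }) {
        int inIndex = decoder.dequeueInputBuffer(10000);
        if (inIndex < 0) continue; // real code should retry rather than drop the csd
        ByteBuffer buf = decoder.getInputBuffer(inIndex);
        buf.clear();
        buf.put(csd);
        decoder.queueInputBuffer(inIndex, 0, csd.length, 0,
                MediaCodec.BUFFER_FLAG_CODEC_CONFIG);
    }
}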

What does it mean if Android MediaCodec dequeueOutputBuffer() returns -3

I am using the MediaCodec APIs to play a video stream (H.264) that I am receiving through the Ethernet port.
From what I understood from official documentation and various examples, I need to perform the following operations.
Create a MediaCodec instance based on the mimetype(video/avc for H.264)
Feed "csd-0" with SPS frame data. SPS frame should start with 0x00000001
Feed "csd-1" buffer with PPS frame data. PPS frame should start with 0x00000001
Call decoder.configure() and decoder.start(). If all goes well, decoder is correctly configured and there is no exception.
Once the MediaCodec is configured, we can supply the decoder with all the remaining frames (depacketized NAL units).
Upon a successful dequeue operation, the decoded buffer can be rendered onto the screen as follows:
outputBufferId = decoder.dequeueOutputBuffer(info, 1000);
decoder.releaseOutputBuffer(outputBufferId, true);
Problem
decoder.dequeueOutputBuffer() is returning -3 on Android 6.0.
Official documentation says that in the case of any errors during dequeueOutputBuffer() only -1 (INFO_TRY_AGAIN_LATER) and -2 (INFO_OUTPUT_FORMAT_CHANGED) are returned and that -3 (INFO_OUTPUT_BUFFERS_CHANGED) is deprecated. So why am I receiving -3?
How am I supposed to correct this?
Code Snippet :
String mimeType = "video/avc";
decoder = MediaCodec.createDecoderByType(mimeType);
mformat = MediaFormat.createVideoFormat(mimeType, 1920, 1080);

while (!Thread.interrupted()) {
    byte[] data = new byte[size];
    bin.read(data, 0, size);

    if (data is SPS frame) {
        mformat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 0);
        mformat.setByteBuffer("csd-0", ByteBuffer.wrap(data));
        continue;
    }

    if (data is PPS frame) {
        mformat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 0);
        mformat.setByteBuffer("csd-1", ByteBuffer.wrap(data));
        decoder.configure(mformat, surface, null, 0);
        decoder.start();
        is_decoder_configured = true;
        continue;
    }

    if (!is_decoder_configured)
        continue;

    index = decoder.dequeueInputBuffer(1000);
    if (index < 0) {
        Log.e(TAG, "Dequeue input buffer failed..\n");
        continue;
    }
    buf = decoder.getInputBuffer(index);
    if (buf == null)
        continue;
    buf.put(data);
    decoder.queueInputBuffer(index, 0, data.length, 0, 0);

    int outputBufferId = decoder.dequeueOutputBuffer(info, 1000);
    switch (outputBufferId) {
        case MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:
            Log.i("DecodeActivity", "New format " + decoder.getOutputFormat());
            break;
        case MediaCodec.INFO_TRY_AGAIN_LATER:
            Log.i("DecodeActivity", "dequeueOutputBuffer timed out!");
            break;
        default:
            Log.i(TAG, "Successfully decoded the output : " + outputBufferId);
            decoder.releaseOutputBuffer(outputBufferId, true);
            break;
    }
}

if (is_decoder_configured) {
    decoder.stop();
    decoder.release();
}
In case you see any other errors, please let me know. I will be grateful!
I don't see anything saying that it will never be returned. The documentation says this:
This constant was deprecated in API level 21.
This return value can be ignored as getOutputBuffers() has been deprecated. Client should request a current buffer using one of the get-buffer or get-image methods each time one has been dequeued.
That is, if you're using getOutputBuffers(), you need to listen for this return value and act on it - but doing this is deprecated. If you're not using getOutputBuffers(), just ignore this return value, i.e. call dequeueOutputBuffer() again with the same parameters and see what it returns afterwards.
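In code, the two paths look roughly like this (a sketch; info and the timeout value are placeholders, and getOutputBuffer(int) needs API 21+):

int outIndex = decoder.dequeueOutputBuffer(info, 10000);
if (outIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
    // Legacy (pre-API 21) path: refresh the cached array and dequeue again.
    outputBuffers = decoder.getOutputBuffers();
} else if (outIndex >= 0) {
    // Modern path: fetch the buffer by id every time, so -3 can simply be ignored.
    ByteBuffer outBuf = decoder.getOutputBuffer(outIndex);
    // ...consume outBuf...
    decoder.releaseOutputBuffer(outIndex, true);
}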

MediaCodec decoder working on Android 5.0 but not on Android 4.4

I am trying to decode an H.264 stream using MediaCodec on Android.
I use csd-0 and csd-1 to set the SPS and PPS like this:
MediaFormat format = MediaFormat.createVideoFormat(MIME, WIDTH, HEIGHT);
format.setByteBuffer("csd-0", sps);
format.setByteBuffer("csd-1", pps);
I configure the decoder with an output surface to render the output data:
decoder.configure(format, surface, null, 0);
Then I just start a thread to decode the data I receive from the network, using this loop:
while (running) {
    if (!isEOS) {
        int inIndex = decoder.dequeueInputBuffer(10000);
        if (inIndex >= 0) {
            ByteBuffer buffer = inputBuffers[inIndex];
            byte[] data = getVideoPacket(); // native method that gets data from the network
            buffer.put(data);
            decoder.queueInputBuffer(inIndex, 0, data.length, 0L, 0);
        }
    }

    int outIndex = decoder.dequeueOutputBuffer(info, 10000);
    switch (outIndex) {
        case MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED:
            Log.d("DecodeActivity", "INFO_OUTPUT_BUFFERS_CHANGED");
            outputBuffers = decoder.getOutputBuffers();
            break;
        case MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:
            Log.d("DecodeActivity", "New format " + decoder.getOutputFormat());
            MediaFormat format = decoder.getOutputFormat();
            break;
        case MediaCodec.INFO_TRY_AGAIN_LATER:
            Log.d("DecodeActivity", "dequeueOutputBuffer timed out!");
            break;
        default:
            ByteBuffer buffer = outputBuffers[outIndex];
            Log.v("DecodeActivity", "We can't use this buffer but render it due to the API limit, " + buffer);
            decoder.releaseOutputBuffer(outIndex, true); // render
            break;
    }

    if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
        Log.d("DecodeActivity", "OutputBuffer BUFFER_FLAG_END_OF_STREAM");
        break;
    }
}
Note that I use 0 for the presentation timestamp in queueInputBuffer, since the data comes from a network socket and I don't know how to compute an appropriate timestamp (a possible workaround is sketched below).
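One common workaround when the stream carries no timing information is to synthesize monotonically increasing timestamps from a frame counter and the nominal frame rate (a sketch; FRAME_RATE is an assumption):

private static final int FRAME_RATE = 30; // assumed nominal frame rate
private long frameIndex = 0;

// Returns a monotonically increasing presentation timestamp in microseconds.
private long nextPtsUs() {
    return frameIndex++ * 1000000L / FRAME_RATE;
}

// ...then: decoder.queueInputBuffer(inIndex, 0, data.length, nextPtsUs(), 0);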
I get what I want on Android 5.0 (and above), but when I run this code on Android 4.4, nothing shows up on the surface (actually I can see some picture fragments occasionally, but most of the time the surface is just black).
I tried filling the input buffer with data fetched from a file (using MediaExtractor), and it works well on both Android 5.0 and 4.4.
Any help would be appreciated.
It seems that the problem was the way the data was fetched. I used to pull data from the native layer:
public static native byte[] getVideoPacket();
and now I push data from the native layer to the Java layer:
env->CallVoidMethod(..., data);
Everything just works now. I don't know why, but this way works for me.
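For completeness, the Java side of that push approach can be as simple as a callback that hands the bytes to a thread-safe queue which the decode loop drains (a sketch; the names are mine):

private final java.util.concurrent.BlockingQueue<byte[]> packetQueue =
        new java.util.concurrent.LinkedBlockingQueue<>();

// Called from the native layer (env->CallVoidMethod) with one access unit per call.
public void onVideoPacket(byte[] data) {
    packetQueue.offer(data);
}

// The decode loop then uses packetQueue.take() where getVideoPacket() was called.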

Android, use MediaCodec with libstreaming

I have a problem with this library:
https://github.com/fyhertz/libstreaming
It allows streaming the camera over the network, and it offers three methods: two using MediaCodec and one using MediaRecorder.
I would like to modify it, and I need to use only MediaCodec. However, first of all I tried the code of example 2 of the library, and I always hit the same error:
The log tells me that the device can use MediaCodec; it sets up the encoder, but when it tests the decoder it fails and the buffer is filled with -1.
This is the method in the EncoderDebugger class where the exception occurs. Can some kind soul help me, please?
private long decode(boolean withPrefix) {
    int n = 3, i = 0, j = 0;
    long elapsed = 0, now = timestamp();
    int decInputIndex = 0, decOutputIndex = 0;
    ByteBuffer[] decInputBuffers = mDecoder.getInputBuffers();
    ByteBuffer[] decOutputBuffers = mDecoder.getOutputBuffers();
    BufferInfo info = new BufferInfo();

    while (elapsed < 3000000) {

        // Feeds the decoder with a NAL unit
        if (i < NB_ENCODED) {
            decInputIndex = mDecoder.dequeueInputBuffer(1000000 / FRAMERATE);
            if (decInputIndex >= 0) {
                int l1 = decInputBuffers[decInputIndex].capacity();
                int l2 = mVideo[i].length;
                decInputBuffers[decInputIndex].clear();

                if ((withPrefix && hasPrefix(mVideo[i])) || (!withPrefix && !hasPrefix(mVideo[i]))) {
                    check(l1 >= l2, "The decoder input buffer is not big enough (nal=" + l2 + ", capacity=" + l1 + ").");
                    decInputBuffers[decInputIndex].put(mVideo[i], 0, mVideo[i].length);
                } else if (withPrefix && !hasPrefix(mVideo[i])) {
                    check(l1 >= l2 + 4, "The decoder input buffer is not big enough (nal=" + (l2 + 4) + ", capacity=" + l1 + ").");
                    decInputBuffers[decInputIndex].put(new byte[] {0, 0, 0, 1});
                    decInputBuffers[decInputIndex].put(mVideo[i], 0, mVideo[i].length);
                } else if (!withPrefix && hasPrefix(mVideo[i])) {
                    check(l1 >= l2 - 4, "The decoder input buffer is not big enough (nal=" + (l2 - 4) + ", capacity=" + l1 + ").");
                    decInputBuffers[decInputIndex].put(mVideo[i], 4, mVideo[i].length - 4);
                }

                mDecoder.queueInputBuffer(decInputIndex, 0, l2, timestamp(), 0);
                i++;
            } else {
                if (VERBOSE) Log.d(TAG, "No buffer available !7");
            }
        }

        // Tries to get a decoded image
        decOutputIndex = mDecoder.dequeueOutputBuffer(info, 1000000 / FRAMERATE);
        if (decOutputIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
            decOutputBuffers = mDecoder.getOutputBuffers();
        } else if (decOutputIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
            mDecOutputFormat = mDecoder.getOutputFormat();
        } else if (decOutputIndex >= 0) {
            if (n > 2) {
                // We have successfully encoded and decoded an image !
                int length = info.size;
                mDecodedVideo[j] = new byte[length];
                decOutputBuffers[decOutputIndex].clear();
                decOutputBuffers[decOutputIndex].get(mDecodedVideo[j], 0, length);
                // Converts the decoded frame to NV21
                convertToNV21(j);
                if (j >= NB_DECODED - 1) {
                    flushMediaCodec(mDecoder);
                    if (VERBOSE) Log.v(TAG, "Decoding " + n + " frames took " + elapsed / 1000 + " ms");
                    return elapsed;
                }
                j++;
            }
            mDecoder.releaseOutputBuffer(decOutputIndex, false);
            n++;
        }

        elapsed = timestamp() - now;
    }

    throw new RuntimeException("The decoder did not decode anything.");
}
Here are my suggestions:
(1) Check the settings of the encoder and decoder, and make sure that they match. For example, the resolution and color format should be the same.
(2) Make sure the very first packet generated by the encoder has been sent and pushed into the decoder. This packet defines the basic settings of the video stream.
(3) The decoder usually buffers 5-10 frames, so data you feed in will not come out again for a few hundred ms.
(4) While initializing the decoder, set the surface to null. Otherwise the output buffers will be consumed by the surface and probably released automatically.
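Regarding point (4), a minimal sketch of configuring the decoder without a Surface so that the decoded bytes stay readable in the output buffers (variable names follow the question's code):

// No output Surface: decoded frames stay in the ByteBuffers
// (typically YUV420 planar or semi-planar; check KEY_COLOR_FORMAT).
mDecoder.configure(format, null, null, 0);
mDecoder.start();

// Later, when dequeueOutputBuffer() returns a valid index:
ByteBuffer out = decOutputBuffers[decOutputIndex];
byte[] frame = new byte[info.size];
out.position(info.offset);
out.get(frame, 0, info.size);
mDecoder.releaseOutputBuffer(decOutputIndex, false); // false: nothing to render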

Using MediaCodec to save series of images as Video

I am trying to use MediaCodec to save a series of Images, saved as Byte Arrays in a file, to a video file. I have tested these images on a SurfaceView (playing them in series) and I can see them fine. I have looked at many examples using MediaCodec, and here is what I understand (please correct me if I am wrong):
Get InputBuffers from MediaCodec object -> fill it with your frame's
image data -> queue the input buffer -> get coded output buffer ->
write it to a file -> increase presentation time and repeat
However, I have tested this a lot and I end up with one of two cases:
All sample projects I tried to imitate caused the media server to die when calling queueInputBuffer for the second time.
I tried calling codec.flush() at the end (after saving the output buffer to a file, although none of the examples I saw did this) and the media server did not die; however, I am not able to open the output video file with any media player, so something is wrong.
Here is my code:
MediaCodec codec = MediaCodec.createEncoderByType(MIMETYPE);
MediaFormat mediaFormat = null;
if(CamcorderProfile.hasProfile(CamcorderProfile.QUALITY_720P)){
mediaFormat = MediaFormat.createVideoFormat(MIMETYPE, 1280 , 720);
} else {
mediaFormat = MediaFormat.createVideoFormat(MIMETYPE, 720, 480);
}
mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, 700000);
mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 10);
mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
codec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
codec.start();
ByteBuffer[] inputBuffers = codec.getInputBuffers();
ByteBuffer[] outputBuffers = codec.getOutputBuffers();
boolean sawInputEOS = false;
int inputBufferIndex=-1,outputBufferIndex=-1;
BufferInfo info=null;
//loop to read YUV byte array from file
inputBufferIndex = codec.dequeueInputBuffer(WAITTIME);
if(bytesread<=0)sawInputEOS=true;
if(inputBufferIndex >= 0){
if(!sawInputEOS){
int samplesiz=dat.length;
inputBuffers[inputBufferIndex].put(dat);
codec.queueInputBuffer(inputBufferIndex, 0, samplesiz, presentationTime, 0);
presentationTime += 100;
info = new BufferInfo();
outputBufferIndex = codec.dequeueOutputBuffer(info, WAITTIME);
Log.i("BATA", "outputBufferIndex="+outputBufferIndex);
if(outputBufferIndex >= 0){
byte[] array = new byte[info.size];
outputBuffers[outputBufferIndex].get(array);
if(array != null){
try {
dos.write(array);
} catch (IOException e) {
e.printStackTrace();
}
}
codec.releaseOutputBuffer(outputBufferIndex, false);
inputBuffers[inputBufferIndex].clear();
outputBuffers[outputBufferIndex].clear();
if(sawInputEOS) break;
}
}else{
codec.queueInputBuffer(inputBufferIndex, 0, 0, presentationTime, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
info = new BufferInfo();
outputBufferIndex = codec.dequeueOutputBuffer(info, WAITTIME);
if(outputBufferIndex >= 0){
byte[] array = new byte[info.size];
outputBuffers[outputBufferIndex].get(array);
if(array != null){
try {
dos.write(array);
} catch (IOException e) {
e.printStackTrace();
}
}
codec.releaseOutputBuffer(outputBufferIndex, false);
inputBuffers[inputBufferIndex].clear();
outputBuffers[outputBufferIndex].clear();
break;
}
}
}
}
codec.flush();
try {
fstream2.close();
dos.flush();
dos.close();
} catch (IOException e) {
e.printStackTrace();
}
codec.stop();
codec.release();
codec = null;
return true;
}
My question is: how can I get a working video from a stream of images using MediaCodec? What am I doing wrong?
Another question (if I am not being too greedy): I would like to add an audio track to this video. Can it be done with MediaCodec as well, or must I use FFmpeg?
Note: I know about MediaMuxer in Android 4.3; however, it is not an option for me as my app must work on Android 4.1+.
Update
Thanks to fadden's answer, I was able to reach EOS without the media server dying (the code above is after modification). However, the file I am getting produces gibberish. Here is a snapshot of the video I get (it only plays as a .h264 file).
My input image format is YUV (NV21 from the camera preview). I can't get it into any playable format. I tried all the COLOR_FormatYUV420 formats and got the same gibberish output. And I still can't find a way (using MediaCodec) to add audio.
I think you have the right general idea. Some things to be aware of:
Not all devices support COLOR_FormatYUV420SemiPlanar. Some only accept planar. (Android 4.3 introduced CTS tests to ensure that the AVC codec supports one or the other.)
It's not the case that queueing an input buffer will immediately result in the generation of one output buffer. Some codecs may accumulate several frames of input before producing output, and may produce output after your input has finished. Make sure your loops take that into account (e.g. your inputBuffers[].clear() will blow up if it's still -1).
Don't try to submit data and send EOS with the same queueInputBuffer call. The data in that frame may be discarded. Always send EOS with a zero-length buffer.
The output of the codecs is generally pretty "raw", e.g. the AVC codec emits an H.264 elementary stream rather than a "cooked" .mp4 file. Many players won't accept this format. If you can't rely on the presence of MediaMuxer you will need to find another way to cook the data (search around on stackoverflow for ideas).
It's certainly not expected that the mediaserver process would crash.
You can find some examples and links to the 4.3 CTS tests here.
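For example, the advice above about sending EOS with a zero-length buffer can look like this (a sketch reusing the variable names from the question's code):

// After the last real frame has been queued, send EOS on its own with an
// empty buffer rather than attaching the flag to frame data.
int inputBufferIndex = codec.dequeueInputBuffer(WAITTIME);
if (inputBufferIndex >= 0) {
    codec.queueInputBuffer(inputBufferIndex, 0, 0, presentationTime,
            MediaCodec.BUFFER_FLAG_END_OF_STREAM);
}
// Then keep draining dequeueOutputBuffer() until a buffer arrives whose
// BufferInfo.flags has BUFFER_FLAG_END_OF_STREAM set.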
Update: As of Android 4.3, MediaCodec and Camera have no ByteBuffer formats in common, so at the very least you will need to fiddle with the chroma planes. However, that sort of problem manifests very differently (as shown in the images for this question).
The image you added looks like video, but with stride and/or alignment issues. Make sure your pixels are laid out correctly. In the CTS EncodeDecodeTest, the generateFrame() method (line 906) shows how to encode both planar and semi-planar YUV420 for MediaCodec.
The easiest way to avoid the format issues is to move the frames through a Surface (like the CameraToMpegTest sample), but unfortunately that's not possible in Android 4.1.
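On the NV21 point: the camera preview delivers NV21 (VU interleaved) while COLOR_FormatYUV420SemiPlanar is NV12 (UV interleaved), so one thing worth checking is the chroma ordering. A minimal conversion sketch that ignores stride and alignment (so it only holds when the encoder's stride equals the width):

static byte[] nv21ToNv12(byte[] nv21, int width, int height) {
    byte[] nv12 = new byte[nv21.length];
    int ySize = width * height;
    System.arraycopy(nv21, 0, nv12, 0, ySize);   // Y plane is unchanged
    for (int i = ySize; i < nv21.length; i += 2) {
        nv12[i] = nv21[i + 1];                   // U
        nv12[i + 1] = nv21[i];                   // V
    }
    return nv12;
}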
