I am trying to use MediaCodec to save a series of images, stored as byte arrays in a file, to a video file. I have tested these images on a SurfaceView (playing them in series) and I can see them fine. I have looked at many examples using MediaCodec, and here is what I understand (please correct me if I am wrong):
Get the input buffers from the MediaCodec object -> fill one with your frame's
image data -> queue the input buffer -> get the encoded output buffer ->
write it to a file -> increase the presentation time and repeat
However, I have tested this a lot and I end up with one of two cases:
All the sample projects I tried to imitate cause the media server to die when calling queueInputBuffer for the second time.
I tried calling codec.flush() at the end (after saving the output buffer to the file, although none of the examples I saw did this) and the media server did not die; however, I am not able to open the output video file with any media player, so something is wrong.
Here is my code:
MediaCodec codec = MediaCodec.createEncoderByType(MIMETYPE);
MediaFormat mediaFormat = null;
if (CamcorderProfile.hasProfile(CamcorderProfile.QUALITY_720P)) {
    mediaFormat = MediaFormat.createVideoFormat(MIMETYPE, 1280, 720);
} else {
    mediaFormat = MediaFormat.createVideoFormat(MIMETYPE, 720, 480);
}
mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, 700000);
mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 10);
mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
codec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
codec.start();

ByteBuffer[] inputBuffers = codec.getInputBuffers();
ByteBuffer[] outputBuffers = codec.getOutputBuffers();
boolean sawInputEOS = false;
int inputBufferIndex = -1, outputBufferIndex = -1;
BufferInfo info = null;

// loop to read YUV byte array from file
    inputBufferIndex = codec.dequeueInputBuffer(WAITTIME);
    if (bytesread <= 0) sawInputEOS = true;
    if (inputBufferIndex >= 0) {
        if (!sawInputEOS) {
            int samplesiz = dat.length;
            inputBuffers[inputBufferIndex].put(dat);
            codec.queueInputBuffer(inputBufferIndex, 0, samplesiz, presentationTime, 0);
            presentationTime += 100;

            info = new BufferInfo();
            outputBufferIndex = codec.dequeueOutputBuffer(info, WAITTIME);
            Log.i("BATA", "outputBufferIndex=" + outputBufferIndex);
            if (outputBufferIndex >= 0) {
                byte[] array = new byte[info.size];
                outputBuffers[outputBufferIndex].get(array);
                if (array != null) {
                    try {
                        dos.write(array);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
                codec.releaseOutputBuffer(outputBufferIndex, false);
                inputBuffers[inputBufferIndex].clear();
                outputBuffers[outputBufferIndex].clear();
                if (sawInputEOS) break;
            }
        } else {
            codec.queueInputBuffer(inputBufferIndex, 0, 0, presentationTime, MediaCodec.BUFFER_FLAG_END_OF_STREAM);

            info = new BufferInfo();
            outputBufferIndex = codec.dequeueOutputBuffer(info, WAITTIME);
            if (outputBufferIndex >= 0) {
                byte[] array = new byte[info.size];
                outputBuffers[outputBufferIndex].get(array);
                if (array != null) {
                    try {
                        dos.write(array);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
                codec.releaseOutputBuffer(outputBufferIndex, false);
                inputBuffers[inputBufferIndex].clear();
                outputBuffers[outputBufferIndex].clear();
                break;
            }
        }
    }
} // end of the read loop (the loop header is omitted above)

codec.flush();
try {
    fstream2.close();
    dos.flush();
    dos.close();
} catch (IOException e) {
    e.printStackTrace();
}
codec.stop();
codec.release();
codec = null;
return true;
} // end of the method (signature omitted above)
My question is: how can I get a working video from a stream of images using MediaCodec? What am I doing wrong?
Another question (if I am not too greedy): I would like to add an audio track to this video. Can it be done with MediaCodec as well, or must I use FFmpeg?
Note: I know about MediaMuxer in Android 4.3; however, it is not an option for me as my app must work on Android 4.1+.
Update
Thanks to fadden's answer, I was able to reach EOS without the media server dying (the code above is after modification). However, the file I am getting produces gibberish. Here is a snapshot of the video I get (it only plays as a .h264 file).
My input image format is YUV (NV21 from the camera preview). I can't get the output into any playable format. I tried all of the COLOR_FormatYUV420 formats and got the same gibberish output. And I still can't find a way (using MediaCodec) to add audio.
I think you have the right general idea. Some things to be aware of:
Not all devices support COLOR_FormatYUV420SemiPlanar. Some only accept planar. (Android 4.3 introduced CTS tests to ensure that the AVC codec supports one or the other.)
It's not the case that queueing an input buffer will immediately result in the generation of one output buffer. Some codecs may accumulate several frames of input before producing output, and may produce output after your input has finished. Make sure your loops take that into account (e.g. your inputBuffers[].clear() will blow up if it's still -1).
Don't try to submit data and send EOS with the same queueInputBuffer call. The data in that frame may be discarded. Always send EOS with a zero-length buffer.
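To make those two points concrete, here is a minimal sketch of one loop iteration, reusing the names from the question (codec, inputBuffers, outputBuffers, dos, dat, WAITTIME, presentationTime); it illustrates the pattern, not a drop-in fix:
// Feed one frame, or a zero-length EOS buffer -- never both in the same call.
int inIndex = codec.dequeueInputBuffer(WAITTIME);
if (inIndex >= 0) {
    if (sawInputEOS) {
        codec.queueInputBuffer(inIndex, 0, 0, presentationTime, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
    } else {
        inputBuffers[inIndex].clear();
        inputBuffers[inIndex].put(dat);
        codec.queueInputBuffer(inIndex, 0, dat.length, presentationTime, 0);
    }
}

// Drain whatever is ready. -1 (INFO_TRY_AGAIN_LATER) just means "nothing yet", not an error.
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int outIndex = codec.dequeueOutputBuffer(info, WAITTIME);
while (outIndex >= 0) {
    byte[] chunk = new byte[info.size];
    outputBuffers[outIndex].get(chunk);
    outputBuffers[outIndex].clear();
    try {
        dos.write(chunk);            // still a raw elementary stream, see the next point
    } catch (IOException e) {
        e.printStackTrace();
    }
    codec.releaseOutputBuffer(outIndex, false);
    if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
        break;                       // encoder is done
    }
    outIndex = codec.dequeueOutputBuffer(info, WAITTIME);
}
// (A complete loop would also handle INFO_OUTPUT_BUFFERS_CHANGED and INFO_OUTPUT_FORMAT_CHANGED.)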
The output of the codecs is generally pretty "raw", e.g. the AVC codec emits an H.264 elementary stream rather than a "cooked" .mp4 file. Many players won't accept this format. If you can't rely on the presence of MediaMuxer you will need to find another way to cook the data (search around on stackoverflow for ideas).
It's certainly not expected that the mediaserver process would crash.
You can find some examples and links to the 4.3 CTS tests here.
Update: As of Android 4.3, MediaCodec and Camera have no ByteBuffer formats in common, so at the very least you will need to fiddle with the chroma planes. However, that sort of problem manifests very differently (as shown in the images for this question).
The image you added looks like video, but with stride and/or alignment issues. Make sure your pixels are laid out correctly. In the CTS EncodeDecodeTest, the generateFrame() method (line 906) shows how to encode both planar and semi-planar YUV420 for MediaCodec.
The easiest way to avoid the format issues is to move the frames through a Surface (like the CameraToMpegTest sample), but unfortunately that's not possible in Android 4.1.
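If a Surface isn't an option, here is a rough, untested sketch of rearranging NV21 preview data into semi-planar (NV12-style) or planar (I420-style) YUV420, ignoring stride and alignment, which is exactly what tends to bite here; treat it as an illustration of the layouts rather than verified conversion code:
// NV21: Y plane followed by interleaved V/U. Width and height assumed even; stride ignored.
static byte[] nv21ToSemiPlanar(byte[] nv21, int width, int height) {
    int ySize = width * height;
    byte[] out = new byte[nv21.length];
    System.arraycopy(nv21, 0, out, 0, ySize);          // Y plane is identical
    for (int i = 0; i < ySize / 2; i += 2) {
        out[ySize + i]     = nv21[ySize + i + 1];      // U (source order is V,U)
        out[ySize + i + 1] = nv21[ySize + i];          // V
    }
    return out;
}

static byte[] nv21ToPlanar(byte[] nv21, int width, int height) {
    int ySize = width * height;
    byte[] out = new byte[nv21.length];
    System.arraycopy(nv21, 0, out, 0, ySize);          // Y plane
    int uOffset = ySize;                               // U plane starts after Y
    int vOffset = ySize + ySize / 4;                   // V plane after U
    for (int i = 0; i < ySize / 2; i += 2) {
        out[vOffset + i / 2] = nv21[ySize + i];        // V
        out[uOffset + i / 2] = nv21[ySize + i + 1];    // U
    }
    return out;
}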
Related
I'm pulling video from a drone that encodes the video stream in H.264. It sends each NAL unit via 1-5 UDP packets. I have code that creates a NAL unit out of those packets and removes the header. It then passes it to MediaCodec, which in turn passes it to a Surface object.
The output should be a video feed but for some reason, all I get is a black screen. I know that the surface is working as intended because if I futz with the NAL unit I get this green garbage that I think happens when MediaCodec doesn't know what it's looking at.
Anyways here's the section of code that handles the actual decoding. Is there anything actually wrong with it or am I looking for the issue in the wrong place?
// This part initializes the decoder and generally sets up everything needed for the while loop down below
encodedVideo = new ServerSocket(11111, 1460, false, 0);
MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1920, 1080);
format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 100000);
try {
    m_codec = MediaCodec.createDecoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
    m_codec.configure(format, new Surface(droneSight.getSurfaceTexture()), null, 0);
} catch (Exception e) {
    e.printStackTrace();
}
m_codec.start();
running = true;
initialFrame = Arrays.copyOf(encodedVideo.getPacketData(true, true), encodedVideo.getPacketData(true, false).length);

// This section handles the actual grabbing and decoding of each NAL unit. Or it would, if it worked.
while (running) {
    byte[] frame = this.getNALUnit();
    int inputIndex = m_codec.dequeueInputBuffer(-1);
    if (inputIndex >= 0) {
        ByteBuffer buf = m_codec.getInputBuffer(inputIndex);
        buf.put(frame);
        m_codec.queueInputBuffer(inputIndex, 0, frame.length, 0, 0);
    }
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    int outputIndex = m_codec.dequeueOutputBuffer(info, 0);
    if (outputIndex >= 0) {
        m_codec.releaseOutputBuffer(outputIndex, true);
    }
}
I initially tried How to play raw NAL units in Andoid exoplayer? but I noticed I'm going to have to do things at a lower level.
I've found this simple MediaCodec example. As you can see, it's a thread that plays a file on a surface passed to it.
Notice the lines
mExtractor = new MediaExtractor();
mExtractor.setDataSource(filePath);
It looks like I should create my own MediaExtractor which, instead of extracting the video units from a file, will use the H.264 NAL units from a buffer I provide.
I can then call mExtractor.setDataSource(MediaDataSource dataSource), see MediaDataSource
It has readAt(long position, byte[] buffer, int offset, int size)
This is where it reads the NAL units. However, how should I pass them? I have no information on the structure of the buffer that needs to be read.
Should I pass a byte[] buffer with the NAL units in it, and if so, in which format? What is the offset for? If it's a buffer, shouldn't I just erase the lines that were read and thus have no offset or size?
By the way, the H.264 NAL units are streamed; they come from RTP packets, not files. I'm going to receive them through C++, store them in a buffer, and try to pass that to the MediaExtractor.
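To make the question concrete, this is roughly what I imagine a byte-array-backed MediaDataSource would look like (my own sketch of the readAt() contract, requires API 23, and the class name is mine):
import android.media.MediaDataSource;
import java.io.IOException;

// Hypothetical: serves a fully buffered stream from memory. A real RTP feed
// would have to block or return only what it has received so far.
class ByteArrayDataSource extends MediaDataSource {
    private final byte[] data;

    ByteArrayDataSource(byte[] data) { this.data = data; }

    @Override
    public int readAt(long position, byte[] buffer, int offset, int size) throws IOException {
        if (position >= data.length) return -1;                     // end of stream
        int toCopy = (int) Math.min(size, data.length - position);
        System.arraycopy(data, (int) position, buffer, offset, toCopy);
        return toCopy;                                              // bytes placed into buffer at 'offset'
    }

    @Override
    public long getSize() throws IOException { return data.length; }

    @Override
    public void close() throws IOException { }
}
If I understand it correctly, offset and size just describe where inside buffer the extractor wants the bytes placed, and the extractor decides which positions it asks for; it also seems to expect container data rather than a bare elementary stream, so this alone may not cover the raw-NAL case.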
UPDATE:
I've been reading a lot about MediaCodec and I think I understand it better. According to https://developer.android.com/reference/android/media/MediaCodec, everything relies on something of this type:
MediaCodec codec = MediaCodec.createByCodecName(name);
MediaFormat mOutputFormat; // member variable
codec.setCallback(new MediaCodec.Callback() {
    @Override
    void onInputBufferAvailable(MediaCodec mc, int inputBufferId) {
        ByteBuffer inputBuffer = codec.getInputBuffer(inputBufferId);
        // fill inputBuffer with valid data
        …
        codec.queueInputBuffer(inputBufferId, …);
    }

    @Override
    void onOutputBufferAvailable(MediaCodec mc, int outputBufferId, …) {
        ByteBuffer outputBuffer = codec.getOutputBuffer(outputBufferId);
        MediaFormat bufferFormat = codec.getOutputFormat(outputBufferId); // option A
        // bufferFormat is equivalent to mOutputFormat
        // outputBuffer is ready to be processed or rendered.
        …
        codec.releaseOutputBuffer(outputBufferId, …);
    }

    @Override
    void onOutputFormatChanged(MediaCodec mc, MediaFormat format) {
        // Subsequent data will conform to new format.
        // Can ignore if using getOutputFormat(outputBufferId)
        mOutputFormat = format; // option B
    }

    @Override
    void onError(…) {
        …
    }
});
codec.configure(format, …);
mOutputFormat = codec.getOutputFormat(); // option B
codec.start();
// wait for processing to complete
codec.stop();
codec.release();
As you can see, I can pass input buffers and get decoded output buffers. The exact byte formats are still a mystery, but I think that's how it works. Also, according to the same article, using ByteBuffers is slow and Surfaces are preferred; they consume the output buffers automatically. Although there's no tutorial on how to do it, there's a section in the article that says it's almost identical, so I guess I just need to add these additional lines:
codec.setInputSurface(Surface inputSurface)
codec.setOutputSurface(Surface outputSurface)
where inputSurface and outputSurface are Surfaces that I pass to a MediaPlayer, which I use (how?) to display the video in an activity. The output buffers will then simply not arrive in onOutputBufferAvailable (because the Surface consumes them first), and neither will the input buffers arrive in onInputBufferAvailable.
So the questions now are: how exactly do I construct a Surface that contains the video buffer, and how do I display a MediaPlayer in an activity?
For output I can simply create a Surface and pass it to a MediaPlayer and MediaCodec, but what about input? Do I need a ByteBuffer for the input anyway, with the Surface input just being for using other outputs as inputs?
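To show what I mean, here is my own sketch of a Surface-output decoder in asynchronous mode (surfaceView, nextNalUnit() and nextPtsUs() are placeholders I made up, and the format details are still unresolved):
// Decoder with a Surface output: input still arrives via ByteBuffers in
// onInputBufferAvailable, but decoded frames go straight to the Surface,
// so onOutputBufferAvailable only needs to release them with render = true.
MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1920, 1080);
MediaCodec decoder = MediaCodec.createDecoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
decoder.setCallback(new MediaCodec.Callback() {
    @Override
    public void onInputBufferAvailable(MediaCodec mc, int index) {
        byte[] nal = nextNalUnit();                        // hypothetical source of NAL units
        ByteBuffer in = mc.getInputBuffer(index);
        in.put(nal);
        mc.queueInputBuffer(index, 0, nal.length, nextPtsUs(), 0);
    }
    @Override
    public void onOutputBufferAvailable(MediaCodec mc, int index, MediaCodec.BufferInfo info) {
        mc.releaseOutputBuffer(index, true);               // true = render to the Surface
    }
    @Override
    public void onOutputFormatChanged(MediaCodec mc, MediaFormat newFormat) { }
    @Override
    public void onError(MediaCodec mc, MediaCodec.CodecException e) { e.printStackTrace(); }
});
decoder.configure(format, surfaceView.getHolder().getSurface(), null, 0);  // output Surface goes here
decoder.start();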
You first need to remove the NAL units and feed the raw H.264 bytes into this method. However, in your case you are reading from a file, so there is nothing to remove since you are not using packets; just feed the data bytes to this method:
private void initDecoder() {
    try {
        writeHeader = true;
        if (mDecodeMediaCodec != null) {
            try {
                mDecodeMediaCodec.stop();
            } catch (Exception e) {}
            try {
                mDecodeMediaCodec.release();
            } catch (Exception e) {}
        }
        mDecodeMediaCodec = MediaCodec.createDecoderByType(MIME_TYPE);
        // MIME_TYPE = video/avc in your case
        mDecodeMediaCodec.configure(format,
                mSurfaceView.getHolder().getSurface(),
                null,
                0);
        mDecodeMediaCodec.start();
        mDecodeInputBuffers = mDecodeMediaCodec.getInputBuffers();
    } catch (IOException e) {
        e.printStackTrace();
        mLatch.trigger();
    }
}

private void decode(byte[] data) {
    try {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int inputBufferIndex = mDecodeMediaCodec.dequeueInputBuffer(1000);
        if (inputBufferIndex >= 0) {
            ByteBuffer buffer = mDecodeInputBuffers[inputBufferIndex];
            buffer.clear();
            buffer.put(data);
            mDecodeMediaCodec.queueInputBuffer(inputBufferIndex,
                    0,
                    data.length,
                    packet.sequence / 1000,
                    0);
            data = null;
            //decodeDataBuffer.clear();
            //decodeDataBuffer = null;
        }
        int outputBufferIndex = mDecodeMediaCodec.dequeueOutputBuffer(info,
                1000);
        do {
            if (outputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
                // no output available yet
            } else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
                //encodeOutputBuffers = mDecodeMediaCodec.getOutputBuffers();
            } else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                format = mDecodeMediaCodec.getOutputFormat();
                // media format changed
            } else if (outputBufferIndex < 0) {
                // unexpected result from decoder.dequeueOutputBuffer
            } else {
                mDecodeMediaCodec.releaseOutputBuffer(outputBufferIndex,
                        true);
                outputBufferIndex = mDecodeMediaCodec.dequeueOutputBuffer(info,
                        0);
            }
        } while (outputBufferIndex >= 0); // keep draining while a valid buffer index comes back
    } catch (Exception e) {
        // the original snippet was missing this catch; without it the method does not compile
        e.printStackTrace();
    }
}
Please don't forget that the I-frame (the first frame's bytes) contains data the decoder cannot work without, and it MUST be fed to the decoder first.
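As an illustration of that last point (a sketch, not the only way to do it): the SPS/PPS can be handed to the decoder before any frame data, either through the MediaFormat or as a codec-config buffer; sps, pps, spsPps, width and height are placeholders here:
// Option 1: put SPS/PPS into the format before configure().
MediaFormat format = MediaFormat.createVideoFormat(MIME_TYPE, width, height);
format.setByteBuffer("csd-0", ByteBuffer.wrap(sps));   // sequence parameter set
format.setByteBuffer("csd-1", ByteBuffer.wrap(pps));   // picture parameter set

// Option 2: queue them as the very first input buffer, flagged as codec config.
int index = mDecodeMediaCodec.dequeueInputBuffer(1000);
if (index >= 0) {
    ByteBuffer buf = mDecodeInputBuffers[index];
    buf.clear();
    buf.put(spsPps);                                    // SPS+PPS bytes, start codes included
    mDecodeMediaCodec.queueInputBuffer(index, 0, spsPps.length, 0,
            MediaCodec.BUFFER_FLAG_CODEC_CONFIG);
}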
I'm trying to obtain the PCM samples for further processing from a decoded mp4 buffer. I'm first extracting the audio track from a video file recorded with the phone's camera app, and I've made sure the audio track is being selected when I get the 'audio/mp4' mime key:
MediaExtractor extractor = new MediaExtractor();
try {
    extractor.setDataSource(fileUri.getPath());
} catch (IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
int numTracks = extractor.getTrackCount();
for (int i = 0; i < numTracks; ++i) {
    MediaFormat format = extractor.getTrackFormat(i);
    String mime = format.getString(MediaFormat.KEY_MIME);
    //Log.d("mime =", mime);
    if (mime.startsWith("audio/")) {
        extractor.selectTrack(i);
        decoder = MediaCodec.createDecoderByType(mime);
        decoder.configure(format, null, null, 0);
        //getSampleCryptoInfo(MediaCodec.CryptoInfo info)
        break;
    }
}
if (decoder == null) {
    Log.e("DecodeActivity", "Can't find audio info!");
    return;
}
decoder.start();
After that, I iterate through the track, feeding the codec the stream of encoded access units, and pulling the decoded access units into a ByteBuffer (this is code I recycled from a video rendering example posted here https://github.com/vecio/MediaCodecDemo):
ByteBuffer[] inputBuffers = decoder.getInputBuffers();
ByteBuffer[] outputBuffers = decoder.getOutputBuffers();
BufferInfo info = new BufferInfo();
boolean isEOS = false;

while (true) {
    if (!isEOS) {
        int inIndex = decoder.dequeueInputBuffer(10000);
        if (inIndex >= 0) {
            ByteBuffer buffer = inputBuffers[inIndex];
            int sampleSize = extractor.readSampleData(buffer, 0);
            if (sampleSize < 0) {
                // We shouldn't stop the playback at this point, just pass the EOS
                // flag to decoder, we will get it again from the
                // dequeueOutputBuffer
                Log.d("DecodeActivity", "InputBuffer BUFFER_FLAG_END_OF_STREAM");
                decoder.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                isEOS = true;
            } else {
                decoder.queueInputBuffer(inIndex, 0, sampleSize, extractor.getSampleTime(), 0);
                extractor.advance();
            }
        }
    }

    int outIndex = decoder.dequeueOutputBuffer(info, 10000);
    switch (outIndex) {
        case MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED:
            Log.d("DecodeActivity", "INFO_OUTPUT_BUFFERS_CHANGED");
            outputBuffers = decoder.getOutputBuffers();
            break;
        case MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:
            Log.d("DecodeActivity", "New format " + decoder.getOutputFormat());
            break;
        case MediaCodec.INFO_TRY_AGAIN_LATER:
            Log.d("DecodeActivity", "dequeueOutputBuffer timed out!");
            break;
        default:
            ByteBuffer buffer = outputBuffers[outIndex];
            // How to obtain PCM samples from this buffer variable??
            decoder.releaseOutputBuffer(outIndex, true);
            break;
    }

    // All decoded frames have been rendered, we can stop playing now
    if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
        Log.d("DecodeActivity", "OutputBuffer BUFFER_FLAG_END_OF_STREAM");
        break;
    }
}
The code seems to work with no errors so far, but I'm currently stuck trying to figure out how to obtain the PCM samples from the ByteBuffer that receives the output buffer's contents. I guess I could assume that, since I'm working with 16-bit stereo audio, there should be at least two bytes per sample in an interleaved scheme... however, I'm not really sure about this, so I can't unequivocally retrieve the PCM samples from this byte stream. Does anybody know how to get these from the MediaCodec API?
I've read about a couple of alternatives using ffmpeg or OpenSL, but since I am new to Android programming I was hoping to avoid the complications of C-based APIs and build my first app using only the tools provided by the Android framework (I'm using KitKat). Any help will be greatly appreciated.
UPDATE: I was able to extract the PCM samples, both the way I was assuming to do it and the way @marcone pointed out. To do so, I added these lines below the buffer assignment:
byte[] b = new byte[info.size-info.offset];
int a = buffer.position();
buffer.get(b);
buffer.position(a);
and finally write the byte array to a file by:
f.write(b,0,info.size-info.offset);
The problem I'm dealing with now is:
The decoded audio samples do not exactly match the decoding of the mp4 audio track done by iZotope. There is a 48-sample mismatch in the wave files' size, and a 2112-sample delay in the decoded signals. My question now is: would all mp4 decoders yield the same output PCM stream, or does it depend on the implementation of the decoder?
I found the delays to be caused by the AAC encoding priming and remainder times, as explained here:
https://developer.apple.com/library/mac/documentation/quicktime/qtff/QTFFAppenG/QTFFAppenG.html
In my case, the priming time is always 2112 samples, and the remainder is naturally variable depending on the audio size.
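For what it's worth, here is the sketch I use to compensate for it, assuming the 2112-sample priming I measured and 16-bit stereo (4 bytes per sample frame); these numbers come from my files and are not a general rule:
// Skip the AAC encoder-delay ("priming") samples at the start of the decoded stream.
// Declared once, outside the decode loop:
final int PRIMING_FRAMES = 2112;                 // measured for my files
final int BYTES_PER_FRAME = 2 /* bytes */ * 2 /* channels */;
long bytesToSkip = (long) PRIMING_FRAMES * BYTES_PER_FRAME;

// Inside the output loop, instead of f.write(b, 0, b.length):
if (bytesToSkip > 0) {
    int skipNow = (int) Math.min(bytesToSkip, b.length);
    f.write(b, skipNow, b.length - skipNow);     // drop the leading priming bytes
    bytesToSkip -= skipNow;
} else {
    f.write(b, 0, b.length);
}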
I know the problem is solved here, but in the current code MediaCodec is used synchronously, and the asynchronous API is now the recommended approach. I learned from this question and did the same thing with asynchronous use of MediaCodec. Just posting the GitHub link so that it might help someone later on.
GitHub asynchronous implementation: link
FYI: the audio player used is just copy-pasted from another thread for the time being. It is deprecated; I will update it when I get time. Also, the code is in Kotlin (it is still easy to understand).
Please check out the Async link for the official MediaCodec documentation.
I'm working on video transcoding in Android, and am using the standard method, as in these samples, to extract and decode video. I tested the same process on different devices with different videos, and I found a problem with the frame count of the decoder input/output.
For some timecode issues, as in this question, I use a queue to record the extracted video sample times and check the queue when I get a decoded frame output, like the following code:
(I omit the encoding-related code to make it clearer)
Queue<Long> sample_time_queue = new LinkedList<Long>();

....

// in transcoding loop
if (is_decode_input_done == false)
{
    int decode_input_index = decoder.dequeueInputBuffer(TIMEOUT_USEC);
    if (decode_input_index >= 0)
    {
        ByteBuffer decoder_input_buffer = decode_input_buffers[decode_input_index];
        int sample_size = extractor.readSampleData(decoder_input_buffer, 0);
        if (sample_size < 0)
        {
            decoder.queueInputBuffer(decode_input_index, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
            is_decode_input_done = true;
        }
        else
        {
            long sample_time = extractor.getSampleTime();
            decoder.queueInputBuffer(decode_input_index, 0, sample_size, sample_time, 0);
            sample_time_queue.offer(sample_time);
            extractor.advance();
        }
    }
    else
    {
        DumpLog(TAG, "Decoder dequeueInputBuffer timed out! Try again later");
    }
}

....

if (is_decode_output_done == false)
{
    int decode_output_index = decoder.dequeueOutputBuffer(decode_buffer_info, TIMEOUT_USEC);
    switch (decode_output_index)
    {
        case MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED:
        {
            ....
            break;
        }
        case MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:
        {
            ....
            break;
        }
        case MediaCodec.INFO_TRY_AGAIN_LATER:
        {
            DumpLog(TAG, "Decoder dequeueOutputBuffer timed out! Try again later");
            break;
        }
        default:
        {
            ByteBuffer decode_output_buffer = decode_output_buffers[decode_output_index];
            long ptime_us = decode_buffer_info.presentationTimeUs;
            boolean is_decode_EOS = ((decode_buffer_info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0);
            if (is_decode_EOS)
            {
                // Decoder gives an EOS output.
                is_decode_output_done = true;
                ....
            }
            else
            {
                // The frame time may not be consistent for some videos.
                // As a workaround, we use a frame time queue to guard this.
                long sample_time = sample_time_queue.poll();
                if (sample_time == ptime_us)
                {
                    // Very good, the decoder input/output time is consistent.
                }
                else
                {
                    // If the decoder input/output frame count is consistent, we can trust the sample time.
                    ptime_us = sample_time;
                }
                // process this frame
                ....
            }
            decoder.releaseOutputBuffer(decode_output_index, false);
        }
    }
}
In some cases, the queue can "correct" the PTS if the decoder gives bad values (e.g. a lot of 0s). However, there are still some issues with the frame count of the decoder input/output.
On an HTC One 801e device, I use the codec OMX.qcom.video.decoder.avc to decode the video (MIME type video/avc). The sample times and PTS match well for the frames, except the last one.
For example, if the extractor feeds 100 frames and then EOS to the decoder, the first 99 decoded frames have exactly the same time values, but the last frame is missing and I get the output EOS from the decoder. I have tested different videos encoded by the built-in camera, by the ffmpeg muxer, and by a video processing application on Windows. All of them have the last frame disappear.
On some pads with the OMX.MTK.VIDEO.DECODER.AVC codec, things become more confusing. Some videos have good PTS from the decoder and a correct input/output frame count (i.e. the queue is empty when decoding is done). Some videos have a consistent input/output frame count with bad PTS in the decoder output (and I can still correct them with the queue). For some videos, a lot of frames are missing during decoding. For example, the extractor gets 210 frames in a 7-second video, but the decoder only outputs the last 180 frames. It is impossible to recover the PTS using the same workaround.
Is there any way to predict the input/output frame count for a MediaCodec decoder? Or, more accurately, to know which frame(s) are dropped by the decoder while the extractor gives it video samples with correct sample times?
Same basic story as in the other question. Pre-4.3, there were no tests confirming that every frame fed to an encoder or decoder came out the other side. I recall that some devices would reliably drop the last frame in certain tests until the codecs were fixed in 4.3.
I didn't search for a workaround at the time, so I don't know if one exists. Delaying before sending EOS might help if it's causing something to shut down early.
I don't believe I ever saw a device drop large numbers of frames. This seems like an unusual case, as it would have been noticeable in any apps that exercised MediaCodec in similar ways even without careful testing.
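If you want to experiment with the EOS delay, one possible sketch (mine, not a verified fix) is to track how many frames went in and came out, and only queue the EOS buffer once the decoder has caught up or a timeout expires; waitedTooLong() is hypothetical:
// Count queued vs. dequeued frames; postpone EOS instead of queueing it
// immediately after the last extracted sample.
int frames_queued = 0, frames_decoded = 0;   // incremented where samples are queued / outputs released
boolean extractor_done = false, eos_sent = false;

if (extractor_done && !eos_sent) {
    // Give the codec a little time to drain before signalling EOS.
    if (frames_decoded >= frames_queued || waitedTooLong()) {
        int in_index = decoder.dequeueInputBuffer(TIMEOUT_USEC);
        if (in_index >= 0) {
            decoder.queueInputBuffer(in_index, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
            eos_sent = true;
        }
    }
}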
I am using Jelly Bean's hardware MediaCodec. I am trying to encode a video, then decode and display it using the codecs (in video/avc format)...
I am using two buttons to "start" and "stop" the video rendering. The first time I render the video, it is displayed correctly. When I start the video a second time, it is not displayed and the following error is thrown:
"NOT in AVI Mode"
I have copy-pasted the code snippets for the Start and Stop buttons.
public void Stop() {
    try {
        // stopping the decoder alone
        decoderMediaCodec.flush();
        decoderMediaCodec.stop();
        decoderMediaCodec.release();
        // Tried with various combinations of flush(), stop() and release();
    } catch (Exception e) {
        e.printStackTrace();
    }
} // this closing brace was missing in the original snippet

public void Start(Surface view) {
    try {
        decoderMediaCodec = MediaCodec.createDecoderByType(mime); // Initialize the decoder again
        MediaFormat format = MediaFormat.createVideoFormat(mime, mWidth, mHeight);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
        format.setInteger(MediaFormat.KEY_BIT_RATE, bitrate);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT, colorFormat);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, framerate);
        decoderMediaCodec.configure(format, view, null, 0);
        decoderMediaCodec.start();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Kindly help me with the video rendering.
Note: the data received by the decoder is valid... the data was checked using the Beyond Compare tool.
I am getting -1 for outputBufferIndex
int outputBufferIndex = decoderMediaCodec.dequeueOutputBuffer(bufferInfo, 0);
In the logs I get
E/( 271):
E/( 271):not in avi mode
E/( 271):
E/( 271): not in avi mode
It would be good if you could share more logs from when you encounter the issue. From the description, could you confirm that the Surface being passed in your second Start call is a valid handle?
If you can rebuild Android, enabling log traces in MediaCodec.cpp would probably be helpful, specifically in the MediaCodec::setNativeWindow method.
P.S.: For a decoder, why are the I-frame interval, bitrate, and framerate being set?
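For reference, a decoder normally only needs the MIME type, the dimensions and, optionally, the codec-specific data; here is a minimal sketch along those lines, with sps/pps as placeholders if you have them:
MediaFormat format = MediaFormat.createVideoFormat(mime, mWidth, mHeight);
// Optional, but helps some decoders start cleanly on a raw stream:
// format.setByteBuffer("csd-0", ByteBuffer.wrap(sps));
// format.setByteBuffer("csd-1", ByteBuffer.wrap(pps));
decoderMediaCodec = MediaCodec.createDecoderByType(mime);
decoderMediaCodec.configure(format, view, null, 0);   // flags = 0: decoder, not encoder
decoderMediaCodec.start();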