I am using MediaCodec on Android for H.264 stream decoding.
The raw data stream consists of a series of NAL units.
Every frame (640x480) in the video is divided into four parts in the stream.
Each time, I send one buffer (one NAL unit) into the MediaCodec decoder and wait for the output buffer.
It turns out that the output buffer capacity is as big as a full YUV frame, but the number of output buffers that come up is nearly four times the number of frames. I
guess each output buffer only returns 1/4 of the decoded result. If that is so, how can I get continuous YUV frame data?
public void setDataToDecoder(byte[] videoBuffer,int size,int id){
Log.e("Decoder","-----------"+id);
inputBufferIndex = mediaCodec.dequeueInputBuffer(-1);
//Log.e("Decoder","InputIndex "+inputBufferIndex);
if(inputBufferIndex>=0){
ByteBuffer inputBuffer=inputBuffers[inputBufferIndex];
inputBuffer.clear();
inputBuffer.put(videoBuffer,0,size);
mediaCodec.queueInputBuffer(inputBufferIndex, 0, size, timeStamp, 0);
timeStamp++;
}
BufferInfo bufferInfo = new BufferInfo();
outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
while(outputBufferIndex>=0){
ByteBuffer outputData=outputBuffers[outputBufferIndex];
if(bufferInfo.size!=0){
outputData.position(bufferInfo.offset);
outputData.limit(bufferInfo.offset+bufferInfo.size);
Log.e("P","oooo"+outputData.position());
Log.e("P","oooo"+outputData.limit());
}
int tmp;
if(FrameCt<=20){
try {
tmp = fc.write(outputData);
Log.e("P",""+tmp);
} catch (IOException e) {
e.printStackTrace();
}
}
FrameCt++;
mediaCodec.releaseOutputBuffer(outputBufferIndex, false);
outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
}
Log.e("Decoder","-----------"+id);
}
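One thing that may be worth trying (an assumption based on the description above, not a verified fix): MediaCodec normally expects each input buffer to contain a complete access unit, so if each frame arrives as four slice NALs, queuing all four together as one buffer should give one output buffer per frame instead of four. A rough sketch, reusing the mediaCodec, inputBuffers and timeStamp fields from the method above (frameNals is a hypothetical array holding the four slice NALs of one frame, each still prefixed with 00 00 00 01):
// Sketch: submit all slice NALs that belong to one frame as a single input buffer.
private void queueWholeFrame(byte[][] frameNals) {
    int total = 0;
    for (byte[] nal : frameNals) {
        total += nal.length;
    }
    int index = mediaCodec.dequeueInputBuffer(-1);
    if (index >= 0) {
        ByteBuffer inputBuffer = inputBuffers[index];
        inputBuffer.clear();
        for (byte[] nal : frameNals) {
            inputBuffer.put(nal); // concatenate the slices back into one access unit
        }
        mediaCodec.queueInputBuffer(index, 0, total, timeStamp, 0);
        timeStamp++;
    }
}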
So I am trying to decode a stream of raw H.264 data and render it to a surface on Android. Here are the steps:
Receive a packet of the H.264 stream.
Accumulate it and try to extract NAL units (byte sequences starting with 00 00 00 01, the NAL start code, and running up until the next start code); a rough extraction sketch follows this list.
For every extracted NAL unit, call feedFrame(data), where data is a byte[] that starts with the start code and contains the extracted unit.
See the video rendered on the surface I provided.
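Step 2 (splitting the accumulated bytes on start codes) could be sketched roughly as below. extractNalUnits is a hypothetical helper, not part of the original code, and it assumes 4-byte 00 00 00 01 start codes only (some encoders also emit 3-byte 00 00 01 codes):
// Sketch: scan an accumulation buffer and return every complete NAL unit found.
// Each returned array still begins with the 00 00 00 01 start code; the bytes
// after the last start code are left for the caller until more data arrives.
private List<byte[]> extractNalUnits(byte[] buf, int length) {
    List<byte[]> nals = new ArrayList<>();
    int start = -1;
    for (int i = 0; i + 3 < length; i++) {
        if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 0 && buf[i + 3] == 1) {
            if (start >= 0) {
                nals.add(Arrays.copyOfRange(buf, start, i)); // previous NAL is complete
            }
            start = i;
        }
    }
    return nals;
}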
The following code uses the AVC decoder:
public StreamReceiver(DashCamActivity activity, Surface surface, int width, int height, byte[] sps, byte[] pps) {
    this.activity = activity;
    decoder = MediaCodec.createDecoderByType("video/avc");
    // (restored) the snippet used "format" without declaring it; presumably it is
    // created from the constructor arguments, roughly like this:
    MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
    format.setByteBuffer("csd-0", ByteBuffer.wrap(sps));
    format.setByteBuffer("csd-1", ByteBuffer.wrap(pps));
    decoder.configure(format, surface, null, 0);
    decoder.start();
}
public void shutdown()
{
decoder.stop();
decoder.release();
}
public void feedFrame(byte[] data)
{
BufferInfo info = new BufferInfo();
int inputIndex = decoder.dequeueInputBuffer(1000);
if(inputIndex == -1)
return;
ByteBuffer inputBuffer = decoder.getInputBuffers()[inputIndex];
if (inputIndex >= 0) {
inputBuffer.clear();
inputBuffer.put(data, 0, data.length);
inputBuffer.clear();
decoder.queueInputBuffer(inputIndex, 0, data.length, 0, 0);
}
int outIndex = decoder.dequeueOutputBuffer(info, 1000);
switch (outIndex) {
case MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:
break;
case MediaCodec.INFO_TRY_AGAIN_LATER:
break;
case MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED:
break;
default:
decoder.releaseOutputBuffer(outIndex, true);
break;
}
}
For smaller resolutions (1024x768, 1280x800) everything works perfectly. However, with larger resolutions (1920x1080, 1900x600), where the length of the byte array I provide is above 65535 (64 KB), the video starts having stutters and artifacts, and Logcat reports strange decoder errors (e.g. IOCTL_MFC_DEC_EXE failed(ret : -2001) on a Galaxy S3).
This also happens on a relatively new device that can play 4K at twice the frame rate I provide. So I must be doing something wrong, and I don't know whether my 64 KB theory has any truth in it; it's merely an observation.
So, to recap:
I am providing individual NAL units to the decoder, each starting with the start code.
The H.264 stream is baseline profile, level 4.0.
Writing the contents of the NAL units to a file in the order they arrive produces a video file that is fully playable in basic media players.
How do I get it to play at high resolutions?
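Before blaming the 64 KB boundary, it may be worth verifying that each feedFrame() call really gets exactly one complete NAL unit; the large IDR frames of a 1080p stream are the ones most likely to be split across receive buffers or merged during accumulation. A purely diagnostic sketch (not a fix):
// Sketch: log the size, start-code count and NAL type of every unit fed to the decoder.
// More (or fewer) than one start code per call points at the accumulation step.
private void checkNalUnit(byte[] data) {
    int startCodes = 0;
    for (int i = 0; i + 3 < data.length; i++) {
        if (data[i] == 0 && data[i + 1] == 0 && data[i + 2] == 0 && data[i + 3] == 1) {
            startCodes++;
        }
    }
    int nalType = data.length > 4 ? (data[4] & 0x1f) : -1;
    Log.d("StreamReceiver", "size=" + data.length + " startCodes=" + startCodes + " nalType=" + nalType);
}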
I'm receiving RTP packets containing AAC audio chunks encoded by libvo_aacenc (44100 Hz, 128 kbps, 2 channels) from an FFServer instance. I'm trying to decode them one by one with MediaCodec on Android and play each chunk back as soon as it is decoded.
Client.java
Player player = new Player();
//RTSP listener
@Override
public void onRTSPPacketReceived(RTPpacket packet) {
byte [] aac_chunk = packet.getpayload();
player.playAAC(aac_chunk);
}
Player.java
private MediaCodec decoder;
private AudioTrack audioTrack;
private MediaExtractor extractor;
public Player(){
extractor = new MediaExtractor();
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
44100, AudioFormat.CHANNEL_OUT_STEREO,
AudioFormat.ENCODING_PCM_16BIT,
44100,
AudioTrack.MODE_STREAM);
MediaFormat format = new MediaFormat();
format.setString(MediaFormat.KEY_MIME, "audio/mp4a-latm");
format.setInteger(MediaFormat.KEY_BIT_RATE, 128 * 1024);
format.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 2);
format.setInteger(MediaFormat.KEY_SAMPLE_RATE, 44100);
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectHE);
try{
decoder = MediaCodec.createDecoderByType("audio/mp4a-latm");
decoder.configure(format, null, null, 0);
} catch (IOException e) {
e.printStackTrace();
}
decoder.start();
audioTrack.play();
}
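One detail worth checking (a general observation about MediaCodec AAC decoding, not something taken from this post): when the decoder is fed raw AAC frames rather than an ADTS stream, it usually also needs a csd-0 buffer with the AudioSpecificConfig, otherwise some decoders silently produce empty output. As far as I know, libvo_aacenc produces plain AAC-LC, so the HE profile above may also be a mismatch. A sketch of what could be added before configure():
// Sketch (assumption): AudioSpecificConfig for AAC-LC, 44100 Hz, stereo.
// 0x12 0x10 = object type 2 (AAC LC), sample-rate index 4 (44100 Hz), 2 channels.
format.setByteBuffer("csd-0", ByteBuffer.wrap(new byte[] { (byte) 0x12, (byte) 0x10 }));
// libvo_aacenc emits AAC-LC, so the LC profile is more likely the right one here:
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);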
//Decode and play one aac_chunk
public void playAAC(byte [] data){
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
ByteBuffer[] inputBuffers = decoder.getInputBuffers();
ByteBuffer[] outputBuffers = decoder.getOutputBuffers();
int inIndex = decoder.dequeueInputBuffer(-1);
if (inIndex >= 0) {
ByteBuffer buffer = inputBuffers[inIndex];
buffer.put(data, 0, data.length);
int sampleSize = extractor.readSampleData(buffer, 0);
if (sampleSize < 0) {
decoder.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
} else {
long presentationTimeUs = extractor.getSampleTime();
decoder.queueInputBuffer(inIndex, 0, sampleSize, presentationTimeUs, 0);
}
}
int outIndex = decoder.dequeueOutputBuffer(info, TIMEOUT);
while(outIndex >= 0){
ByteBuffer outBuffer = outputBuffers[outIndex];
byte[] decoded_chunk = new byte[info.size];
outBuffer.get(decoded_chunk); // Read the buffer all at once
outBuffer.clear();
//!! Decoded decoded_chunk.length = 0 !!
System.out.println("DECODED CHUNK SIZE: "+decoded_chunk.length);
//Instant play of the decoded chunk
audioTrack.write(decoded_chunk, info.offset, info.offset + info.size);
decoder.releaseOutputBuffer(outIndex, false);
break;
}
decoder.flush();
}
On start, MediaCodec is correctly initialized.
MediaCodec: (0xa5040280) start
MediaCodec: (0xa5040280) input buffers allocated
MediaCodec: (0xa5040280) numBuffers (4)
MediaCodec: (0xa5040280) output buffers allocated
MediaCodec: (0xa5040280) numBuffers (4)
The problem
I'm actually hearing no sound. MediaCodec is working, but it looks like it's not decoding anything into its output buffers, since decoded_chunk.length = 0 and outBuffer.limit() = 0.
Questions
Should I fill the MediaCodec input buffers asynchronously? Unfortunately, I didn't find anything about this problem (instant decode and playback) in the examples I found.
I've followed these examples:
Decode and playback AAC file extracting media information. (link)
Same but different way to implement MediaCodec, steps defined (link)
I've solved this by using MediaCodec in asynchronous mode with MediaCodec.Callback, as described in the official docs here; it is available only from API level 21 (minSdkVersion 21).
Basically, I put every RTP audio chunk I receive into a queue, and I'm then notified every time the MediaCodec buffer state changes. It makes the decoder flow much easier to handle.
decoder.setCallback(new MediaCodec.Callback() {
@Override
public void onInputBufferAvailable(@NonNull MediaCodec mediaCodec, int i) {
//One InputBuffer is available to decode
while (true) {
if(queue.size() > 0) {
byte[] data = queue.removeFirst();
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
ByteBuffer buffer = mediaCodec.getInputBuffer(i);
buffer.put(data, 0, data.length);
mediaCodec.queueInputBuffer(i, 0, data.length, 0, 0);
break;
}
}
}
@Override
public void onOutputBufferAvailable(@NonNull MediaCodec mediaCodec, int i, @NonNull MediaCodec.BufferInfo info) {
//DECODING PACKET ENDED
ByteBuffer outBuffer = mediaCodec.getOutputBuffer(i);
byte[] chunk = new byte[info.size];
outBuffer.get(chunk); // Read the buffer all at once
outBuffer.clear();
audioTrack.write(chunk, info.offset, info.offset + info.size); // AudioTrack write data
mediaCodec.releaseOutputBuffer(i, false);
}
@Override
public void onError(@NonNull MediaCodec mediaCodec, @NonNull MediaCodec.CodecException e) {}
@Override
public void onOutputFormatChanged(@NonNull MediaCodec mediaCodec, @NonNull MediaFormat mediaFormat) {}
});
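For completeness: setCallback() has to be invoked before configure(), otherwise the codec stays in synchronous mode. The setup order is roughly as in the sketch below, reusing the decoder, format and audioTrack from the Player constructor above, with callback being the MediaCodec.Callback instance shown here:
// Sketch: asynchronous MediaCodec setup order (API 21+).
decoder = MediaCodec.createDecoderByType("audio/mp4a-latm");
decoder.setCallback(callback);            // must happen before configure()
decoder.configure(format, null, null, 0); // codec now runs in async mode
decoder.start();
audioTrack.play();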
I tried to convert camera YUV data (NV21, obtained from onPreviewFrame) to YUV420SemiPlanar and send it to MediaCodec/MediaMuxer to produce an MP4 file. But I always found that the size of the input buffer was unexpected. Below are my code and logcat output.
MediaCodec/MediaFormat configuration:
MediaCodecInfo codecInfo = selectCodec(MIME_TYPE_VIDEO);
int colorFormat = selectColorFormat(codecInfo, MIME_TYPE_VIDEO);
formatVideo = MediaFormat.createVideoFormat("video/avc", 1280, 720);
formatVideo.setInteger(MediaFormat.KEY_COLOR_FORMAT, colorFormat);
formatVideo.setInteger(MediaFormat.KEY_BIT_RATE, 6000000);
formatVideo.setInteger(MediaFormat.KEY_FRAME_RATE, 40);
formatVideo.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
mVideoCodec = MediaCodec.createEncoderByType("video/avc");
mVideoCodec.configure(formatVideo, null,null,MediaCodec.CONFIGURE_FLAG_ENCODE);
The camera settings are below. I captured YUV frames in onPreviewFrame and sent them to the codec.
Camera.Parameters parameters = camera.getParameters();
parameters.setPreviewFormat(ImageFormat.NV21);
parameters.setPreviewSize(1280, 720);
@Override
public void onPreviewFrame(final byte[] sourceYUVData, Camera camera)
{
//sent sourceYUVData to Codec
mVideoCodecInput.setByteBufferVideo(buffer, isUsingFrontCamera, Input_endOfStream);
}
//sent YUV data to Mediacodec dequeueinputbuffer
public void setByteBufferVideo(byte[] buffer, boolean isUsingFrontCamera, boolean Input_endOfStream)
{
ByteBuffer[] inputBuffers = mVideoCodec.getInputBuffers();
videoInputBufferIndex = mVideoCodec.dequeueInputBuffer(-1);
if(videoInputBufferIndex>=0) {
ByteBuffer inputBuffer = inputBuffers[videoInputBufferIndex];
inputBuffer.clear();
if (VERBOSE) {
Log.w(TAG, put_v + "[put_v] buffer length = " + buffer.length + "; inputBuffer length =" + inputBuffer.capacity());
}
//skip following code
}
I used 1280x720, and found that when the color format was YUV420SemiPlanar, the MediaCodec input buffer size from dequeueInputBuffer was 1425408 instead of 1382400 (1280x720x3/2). However, when the color format was YUV420Planar, the input buffer size was 1382400.
logcat:
[put_v] buffer length = 1382400 ; inputBuffer length =1425408
Is the size correct, or did I get the wrong setting? If the size for YUV420SemiPlanar is correct, could someone teach me how to convert NV21 to YUV420SemiPlanar at that size? Thanks a lot.
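Two remarks, based on how MediaCodec input buffers generally behave rather than on anything specific to this post: the reported capacity (1425408) only has to be at least one frame; you still queue exactly 1382400 bytes and pass that size to queueInputBuffer(), so the extra 43008 bytes never need to be filled. And NV21 differs from the semi-planar layout the encoder expects (NV12-style, U before V) only in the order of the interleaved chroma bytes, so a minimal conversion is a swap inside the chroma plane. A sketch with a hypothetical helper (assuming a tightly packed 1280x720 frame; some encoders additionally want stride/slice-height alignment):
// Sketch: convert NV21 (Y plane + interleaved V,U) to YUV420SemiPlanar / NV12
// (Y plane + interleaved U,V) by swapping each chroma byte pair.
private byte[] nv21ToNv12(byte[] nv21, int width, int height) {
    int ySize = width * height;
    byte[] nv12 = new byte[ySize * 3 / 2];
    System.arraycopy(nv21, 0, nv12, 0, ySize); // luma plane is identical
    for (int i = ySize; i < nv12.length; i += 2) {
        nv12[i] = nv21[i + 1];     // U comes first in NV12
        nv12[i + 1] = nv21[i];     // V second
    }
    return nv12;
}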
Thanks fadden! Following your suggestion, I verified that the mismatched buffer size doesn't cause the frame pacing problem. I found the pacing problem was gone when I commented out the rotation function. Now I am trying to optimize the code flow. For the rotation function, I used Rotate YUV420/NV21 Image in android
byte[] bufferOutputRotation = new byte[buffer.length];
public void setByteBufferVideo(byte[] buffer, boolean isUsingFrontCamera, boolean Input_endOfStream){
if(Build.VERSION.SDK_INT >=18){
try{
endOfStream = Input_endOfStream;
if(!Input_endOfStream){
ByteBuffer[] inputBuffers = mVideoCodec.getInputBuffers();
videoInputBufferIndex = mVideoCodec.dequeueInputBuffer(-1);
if (VERBOSE) {
Log.w(TAG,"[put_v]:"+(put_v)+"; videoInputBufferIndex = "+videoInputBufferIndex+"; endOfStream = "+endOfStream);
}
if(videoInputBufferIndex>=0) {
ByteBuffer inputBuffer = inputBuffers[videoInputBufferIndex];
inputBuffer.clear();
if(isUsingFrontCamera){
bufferOutputRotation=buffer;
}else {
bufferOutputRotation=buffer;
//issue was gone by comment the rotation for rear camera
//rotateNV21(buffer,bufferOutputRotation,1280,720,180);
}
inputBuffer.put(mNV21Convertor.convert(bufferOutputRotation));
videoInputLength = buffer.length;
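If the rotation itself turns out to be the slow part, a 180° rotation of NV21 data can be done with two simple reversals rather than general index math: reverse the luma bytes, and reverse the order of the V,U pairs while keeping V before U inside each pair. A rough sketch (assuming a tightly packed frame, e.g. 1280x720):
// Sketch: 180-degree rotation of an NV21 frame into a preallocated buffer.
private void rotateNv21By180(byte[] src, byte[] dst, int width, int height) {
    int ySize = width * height;
    // Luma plane: plain byte reversal.
    for (int i = 0; i < ySize; i++) {
        dst[i] = src[ySize - 1 - i];
    }
    // Chroma plane: reverse the order of the (V, U) pairs.
    int cSize = ySize / 2;
    for (int i = 0; i < cSize; i += 2) {
        dst[ySize + i] = src[ySize + cSize - 2 - i];     // V
        dst[ySize + i + 1] = src[ySize + cSize - 1 - i]; // U
    }
}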
I'm working on decoding a video file and then re-encoding it to a smaller size / lower bit rate video file. I have finished the decoding part and get the raw video output buffers, but when I queue a raw output buffer into an input buffer of the encoder, it throws an overflow exception, because the capacity of the encoder's input buffer is too small to hold the raw output buffer.
I find that if I configure the width and height of the encoder's output format to be bigger, the capacity of both the input and the output buffers of the encoder becomes bigger too, and they are very close in value. When I configure the width and height to the original video size, the input buffer is big enough to hold the raw output buffer of the decoder, and I get the output video file. But I want a video with smaller dimensions and a smaller bit rate.
The key code is below.
MediaCodecInfo codecInfo = selectCodec("video/avc");
MediaFormat outformat = MediaFormat.createVideoFormat("video/avc", 1280, 720);
int colorfmt = selectColorFormat(codecInfo, "video/avc");
outformat.setInteger(MediaFormat.KEY_COLOR_FORMAT, colorfmt);//2141391876);
outformat.setInteger(MediaFormat.KEY_BIT_RATE, 178*1024*8);
outformat.setInteger(MediaFormat.KEY_FRAME_RATE, 24);
outformat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
//outformat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 3400000);;
MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(outformat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
encoder.start();
Extract video and decode
int inIndex = decoder.dequeueInputBuffer(10000);
if (inIndex >= 0) {
ByteBuffer buffer = inputBuffers[inIndex];
int sampleSize = extractor.readSampleData(buffer, 0);
if (sampleSize < 0) {
decoder.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
isEOS = true;
} else {
decoder.queueInputBuffer(inIndex, 0, sampleSize, extractor.getSampleTime(), 0);
extractor.advance();
}
}
Encode and mux to mp4 file
int outIndex = decoder.dequeueOutputBuffer(info, 10000);
switch (outIndex) {
case MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED:
Log.d("DecodeActivity", "INFO_OUTPUT_BUFFERS_CHANGED");
outputBuffers = decoder.getOutputBuffers();
break;
case MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:
Log.d("DecodeActivity", "New format " + decoder.getOutputFormat());
MediaFormat infmt = decoder.getOutputFormat();
break;
case MediaCodec.INFO_TRY_AGAIN_LATER:
Log.d("DecodeActivity", "dequeueOutputBuffer timed out!");
break;
default:
ByteBuffer buffer = outputBuffers[outIndex];
buffer.position(info.offset);
buffer.limit(info.offset + info.size);
int encInputIndex = encoder.dequeueInputBuffer(10000);
if (encInputIndex >= 0) {
ByteBuffer encBuffer = encInputBuf[encInputIndex];
encBuffer.clear();
encBuffer.put(buffer);
encoder.queueInputBuffer(encInputIndex, 0, info.size, info.presentationTimeUs,0);
}
ByteBuffer[] encOutputBuf = encoder.getOutputBuffers();
int trackindex = 0;
while(true) {
MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
int encoderStatus = encoder.dequeueOutputBuffer(bufferInfo, 10000);
if (encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
// no output available yet
break;
} else if (encoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
// not expected for an encoder
encOutputBuf = encoder.getOutputBuffers();
} else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
MediaFormat newFormat = encoder.getOutputFormat();
// now that we have the Magic Goodies, start the muxer
trackindex = muxer.addTrack(newFormat);
muxer.start();
} else if (encoderStatus < 0) {
// let's ignore it
} else {
ByteBuffer buf = encOutputBuf[encoderStatus];
muxer.writeSampleData(trackindex, buf, bufferInfo);
encoder.releaseOutputBuffer(encoderStatus, false);
}
}
@fadden I'd like to discuss this with you and get your help. Thanks!
The decoder in MediaCodec cannot resize automatically; it always outputs frames at their real size.
The encoder likewise encodes frames at the real input size and cannot resize internally. So if you want to feed the encoder frames of a different size than the decoded frames, you need to do the resizing yourself.
You can also check INDE Media Pack: https://software.intel.com/en-us/intel-inde/media-pack. It has a MediaComposer class that lets you easily resize video.
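If you do the resize on the raw buffers yourself, a naive nearest-neighbour scaler for planar YUV420 (I420) could look like the sketch below; this is only an illustration (it assumes even dimensions and a tightly packed planar layout), and a real pipeline would more likely render to a Surface and scale with OpenGL ES:
// Sketch: nearest-neighbour downscale of a YUV420Planar (I420) frame.
private byte[] scaleI420(byte[] src, int srcW, int srcH, int dstW, int dstH) {
    byte[] dst = new byte[dstW * dstH * 3 / 2];
    // Luma plane.
    for (int y = 0; y < dstH; y++) {
        int sy = y * srcH / dstH;
        for (int x = 0; x < dstW; x++) {
            dst[y * dstW + x] = src[sy * srcW + x * srcW / dstW];
        }
    }
    // Chroma planes (U then V), each quarter-size.
    int srcYSize = srcW * srcH, dstYSize = dstW * dstH;
    int srcCW = srcW / 2, srcCH = srcH / 2, dstCW = dstW / 2, dstCH = dstH / 2;
    for (int plane = 0; plane < 2; plane++) {
        int srcOff = srcYSize + plane * srcCW * srcCH;
        int dstOff = dstYSize + plane * dstCW * dstCH;
        for (int y = 0; y < dstCH; y++) {
            int sy = y * srcCH / dstCH;
            for (int x = 0; x < dstCW; x++) {
                dst[dstOff + y * dstCW + x] = src[srcOff + sy * srcCW + x * srcCW / dstCW];
            }
        }
    }
    return dst;
}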
I'm trying to get all frames from a video file using MediaCodec. If I display the video on a SurfaceView, everything is OK. But if the surface is null and I try to get a Bitmap from the byte array, I always get null or a runtime exception.
This is my code:
private class PlayerThread extends Thread {
private MediaExtractor extractor;
private MediaCodec decoder;
private Surface surface;
public PlayerThread(Surface surface) {
this.surface = surface;
}
@Override
public void run() {
extractor = new MediaExtractor();
extractor.setDataSource(videoPath);
for (int i = 0; i < extractor.getTrackCount(); i++) {
MediaFormat format = extractor.getTrackFormat(i);
String mime = format.getString(MediaFormat.KEY_MIME);
if (mime.startsWith("video/")) {
extractor.selectTrack(i);
decoder = MediaCodec.createDecoderByType(mime);
decoder.configure(format, /*surface*/ null, null, 0);
break;
}
}
if (decoder == null) {
Log.e("DecodeActivity", "Can't find video info!");
return;
}
decoder.start();
ByteBuffer[] inputBuffers = decoder.getInputBuffers();
ByteBuffer[] outputBuffers = decoder.getOutputBuffers();
BufferInfo info = new BufferInfo();
boolean isEOS = false;
while (!Thread.interrupted()) {
++numFrames;
if (!isEOS) {
int inIndex = decoder.dequeueInputBuffer(10000);
if (inIndex >= 0) {
ByteBuffer buffer = inputBuffers[inIndex];
int sampleSize = extractor.readSampleData(buffer, 0);
if (sampleSize < 0) {
// We shouldn't stop the playback at this point,
// just pass the EOS
// flag to decoder, we will get it again from the
// dequeueOutputBuffer
Log.d("DecodeActivity",
"InputBuffer BUFFER_FLAG_END_OF_STREAM");
decoder.queueInputBuffer(inIndex, 0, 0, 0,
MediaCodec.BUFFER_FLAG_END_OF_STREAM);
isEOS = true;
} else {
decoder.queueInputBuffer(inIndex, 0, sampleSize,
extractor.getSampleTime(), 0);
extractor.advance();
}
}
}
int outIndex = decoder.dequeueOutputBuffer(info, 10000);
switch (outIndex) {
case MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED:
Log.d("DecodeActivity", "INFO_OUTPUT_BUFFERS_CHANGED");
outputBuffers = decoder.getOutputBuffers();
break;
case MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:
Log.d("DecodeActivity",
"New format " + decoder.getOutputFormat());
break;
case MediaCodec.INFO_TRY_AGAIN_LATER:
Log.d("DecodeActivity", "dequeueOutputBuffer timed out!");
break;
default:
// I tried to get the Bitmap in a few ways
//1.
//ByteBuffer buffer = outputBuffers[outIndex];
//byte[] ba = new byte[buffer.remaining()];
//buffer.get(ba);
//final Bitmap bmp = BitmapFactory.decodeByteArray(ba, 0, ba.length);// this return null
//2.
//ByteBuffer buffer = outputBuffers[outIndex];
//final Bitmap bmp = Bitmap.createBitmap(1920, 1080, Config.ARGB_8888);//using MediaFormat object I know width and height
//int a = bmp.getByteCount(); //8294400
//buffer.rewind();
//int b = buffer.capacity();//3137536
//int c = buffer.remaining();//3137536
//bmp.copyPixelsFromBuffer(buffer); // java.lang.RuntimeException: Buffer not large enough for pixels
//I know what the exception means, but I don't know why it occurs
//In this place I need bitmap
// We use a very simple clock to keep the video FPS, or the
// video
// playback will be too fast
while (info.presentationTimeUs / 1000 > System
.currentTimeMillis() - startMs) {
try {
sleep(10);
} catch (InterruptedException e) {
e.printStackTrace();
break;
}
}
decoder.releaseOutputBuffer(outIndex, true);
break;
}
// All decoded frames have been rendered, we can stop playing
// now
if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
Log.d("DecodeActivity",
"OutputBuffer BUFFER_FLAG_END_OF_STREAM");
break;
}
}
decoder.stop();
decoder.release();
extractor.release();
}
}
I have no idea how to solve it.
There are a couple of problems with your code (or, arguably, with MediaCodec).
First, the ByteBuffer handling in MediaCodec is poor, so you have to manually set the buffer parameters from the values in the BufferInfo object that is filled out by dequeueOutputBuffer().
Second, the values that come out of the MediaCodec are in YUV format, not RGB. As of Android 4.4, the Android framework does not provide a function that will convert the output to Bitmap. You will need to provide your own YUV-to-RGB converters (plural -- there are multiple formats). Some devices use proprietary, undocumented color formats.
You can see an example of extracting and examining MediaCodec decoder buffer contents in the CTS EncodeDecodeTest buffer-to-buffer methods (e.g. checkFrame()).
A more reliable way to go about this is to go back to decoding to a Surface, but extract the pixels with OpenGL ES. The bigflake ExtractMpegFramesTest shows how to do this.
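As a rough illustration of such a converter, here is a sketch for the simplest case, COLOR_FormatYUV420Planar (I420); many devices actually report semi-planar or vendor-specific formats, where the chroma indexing below would have to change:
// Sketch: convert an I420 (YUV420Planar) frame to an ARGB_8888 Bitmap.
private Bitmap i420ToBitmap(byte[] yuv, int width, int height) {
    int[] argb = new int[width * height];
    int ySize = width * height;
    int uOff = ySize, vOff = ySize + ySize / 4;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int yy = yuv[y * width + x] & 0xff;
            int ci = (y / 2) * (width / 2) + (x / 2);
            int u = (yuv[uOff + ci] & 0xff) - 128;
            int v = (yuv[vOff + ci] & 0xff) - 128;
            int r = clamp(yy + (int) (1.402f * v));
            int g = clamp(yy - (int) (0.344f * u + 0.714f * v));
            int b = clamp(yy + (int) (1.772f * u));
            argb[y * width + x] = 0xff000000 | (r << 16) | (g << 8) | b;
        }
    }
    return Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888);
}
private int clamp(int c) {
    return c < 0 ? 0 : (c > 255 ? 255 : c);
}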
The short answer is:
In the default section of your switch statement, you need to reset the ByteBuffer position, so instead of:
ByteBuffer buffer = outputBuffers[outIndex];
byte[] ba = new byte[buffer.remaining()];
buffer.get(ba);
you should have something like
ByteBuffer buffer = outputBuffers[outIndex];
buffer.position(info.offset);
buffer.limit(info.offset + info.size);
byte[] ba = new byte[buffer.remaining()];
buffer.get(ba);
In your original code, you will find that your ByteBuffer has a remaining() of 0.