MediaCodec sample implementation in Android 4.1

I am trying to display video buffers on an Android device. I am using the MediaCodec API released in Android 4.1 Jelly Bean.
The sample goes like this:
MediaCodec codec = MediaCodec.createDecoderByType(type);
codec.configure(format, ...);
The configure method accepts three other arguments apart from MediaFormat. I have been able to figure out MediaFormat somehow, but I am not sure about the other three parameters (below):
Surface, MediaCrypto and flags.
Any leads?
Also, what should I do with the MediaCrypto argument if I am not encrypting my video buffers?
Requirements:
1) Decode the buffers on the Android device,
2) Display them on the screen.

You can see the article here:
http://dpsm.wordpress.com/2012/07/28/android-mediacodec-decoded/

Just for completeness:
To decode -
The Surface is the surface to render the frames to (or null if not rendering).
MediaCrypto should be null if there is no encryption.
flags == 0 if decoding, or MediaCodec.CONFIGURE_FLAG_ENCODE if encoding.
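Putting that together, a minimal decode-only configure call might look like this (the H.264 MIME type, width/height, and surfaceView are my assumptions, not from the question):
MediaCodec codec = MediaCodec.createDecoderByType("video/avc");
MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
// render target Surface, null MediaCrypto (no encryption), flags == 0 for decoding
codec.configure(format, surfaceView.getHolder().getSurface(), null, 0);
codec.start();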

Related

Decode H.264 stream using MediaCodec API JNI

I am developing an H.264 decoder using the MediaCodec API. I am trying to call the MediaCodec Java API in the JNI layer from a function like:
void Decompress(const unsigned char *encodedInputdata, unsigned int inputLength, unsigned char **outputDecodedData, int &width, int &height) {
// encodedInputdata is encoded H.264 remote stream
// .....
// outputDecodedData = call JNI function of MediaCodec Java API to decode
// .....
}
Later I will send the outputDecodedData to my existing video rendering pipeline and render on Surface.
I hope I will be able to write a Java function to decode the input stream, but these would be the challenges -
This resource states that -
...you can't do anything with the decoded video frame but render them
to surface
Here a Surface has been passed to decoder.configure(format, surface, null, 0) to render the output ByteBuffer on the surface, and it is claimed we can't use this buffer but render it due to the API limit.
So, will I be able to send the output ByteBuffer to the native layer to cast as unsigned char* and pass it to my rendering pipeline, instead of passing a Surface to configure()?
I see two fundamental problems with your proposed function definition.
First, MediaCodec operates on access units (NAL units for H.264), not arbitrary chunks of data from a stream, so you need to pass in one NAL unit at a time. Once the chunk is received, the codec may want to wait for additional frames to arrive before producing any output. You cannot in general pass in one frame of input and wait to receive one frame of output.
Second, as you noted, the ByteBuffer output is YUV-encoded in one of several color formats. The format varies from device to device; Qualcomm devices notably use their own proprietary format. (It has been reverse-engineered, though, so if you search around you can find some code to unravel it.)
The common workaround is to send the video frames to a SurfaceTexture, which converts them to GLES "external" textures. These can be manipulated in various ways, or rendered to a pbuffer and extracted with glReadPixels().
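A rough sketch of that approach (textureId is assumed to be a GL_TEXTURE_EXTERNAL_OES texture you created on a thread with a current EGL context):
// Route decoder output to a SurfaceTexture so frames arrive as external textures
SurfaceTexture surfaceTexture = new SurfaceTexture(textureId);
Surface surface = new Surface(surfaceTexture);
decoder.configure(format, surface, null, 0);
decoder.start();
// ...after releasing an output buffer with render == true, latch the frame:
surfaceTexture.updateTexImage();
// ...then draw the texture into a pbuffer and read it back with glReadPixels()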

AAC encoder using MediaCodec was initialized with one channel but outputs as two channels

The AAC encoder is initialized as below:
MediaFormat outfmt = new MediaFormat();
outfmt.setString(MediaFormat.KEY_MIME, "audio/mp4a-latm");
outfmt.setInteger(MediaFormat.KEY_AAC_PROFILE, mAudioProfile);
mSampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
outfmt.setInteger(MediaFormat.KEY_SAMPLE_RATE, mSampleRate);
mChannels = format.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
outfmt.setInteger(MediaFormat.KEY_CHANNEL_COUNT, mChannels);
outfmt.setInteger(MediaFormat.KEY_BIT_RATE, 64000);
audioEncoder.configure(outfmt, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
audioEncoder.start();
But the encoder behaves differently on the two devices.
One outputs normal presentation timestamps:
64000 128000 192000 256000 320000
The other outputs each timestamp twice, as if there were two channels:
64000 64000 128000 128000 192000 192000 256000 256000 320000 320000
And the format extracted using MediaExtractor differs between the two devices:
the normal one is
{max-input-size=1572864, aac-profile=2,
csd-0=java.nio.ByteArrayBuffer[position=0,limit=2,capacity=2], sample-rate=16000,
durationUs=8640000, channel-count=1, mime=audio/mp4a-latm, isDMCMMExtractor=1}
The other is
{max-input-size=798, durationUs=8640000, channel-count=1, mime=audio/mp4a-latm,
csd-0=java.nio.ByteArrayBuffer[position=0,limit=2,capacity=2], sample-rate=16000}
So the original audio has one channel and the encoder is configured with one channel too, but the encoder outputs in a two-channel way.
Does the isDMCMMExtractor flag matter here?
Help! Help!
@fadden
First off, the question is very hard to understand - both of the listed MediaFormat contents show channel-count=1, so there's very little actual explanation of the issue itself, only an explanation of other surrounding details.
However - the software AAC decoder in some Android versions (4.1 if I remember correctly, possibly 4.2 as well) will decode mono AAC into stereo - not sure if some of the hardware AAC decoders do the same. You can argue whether this is a bug or just unexpected behaviour, but it's something you have to live with. In the case that the decoder returns stereo data even though the input was mono, both stereo channels will have the same (mono) content.
So basically, you have to be prepared to handle this - either pass the actual format information from the decoder (not from MediaExtractor) to whoever is using the data (e.g. reconfigure the audio output to stereo), or be prepared to mix down stereo back into mono if you really need to have the output in mono format.
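As a sketch of that advice (names here are illustrative, not from the original post): read the channel count from the decoder's own output format when INFO_OUTPUT_FORMAT_CHANGED fires, and downmix if it reports stereo:
// Trust the decoder's reported format, not MediaExtractor's
if (outIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
    int channels = decoder.getOutputFormat().getInteger(MediaFormat.KEY_CHANNEL_COUNT);
}

// Downmix interleaved 16-bit stereo PCM to mono by averaging the two channels
static short[] downmixToMono(short[] stereo) {
    short[] mono = new short[stereo.length / 2];
    for (int i = 0; i < mono.length; i++) {
        mono[i] = (short) ((stereo[2 * i] + stereo[2 * i + 1]) / 2);
    }
    return mono;
}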

Set AVC/H.264 profile when encoding video in Android using MediaCodec API

Android provides a way to query the supported encoding profiles, but when setting up an encoder I cannot find a way to specify the desired profile to be used.
Finding supported profile/level pairs
Using the MediaCodec API in Android, you can call getCodecInfo() once you have chosen an encoder component. This returns a MediaCodecInfo object which provides details about the codec component being used. getCapabilitiesForType() returns a CodecCapabilities object detailing what the codec is capable of, including an array of CodecProfileLevels describing the supported profile/level pairs.
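For concreteness, a sketch of that query (codec is assumed to be your chosen encoder MediaCodec instance, "video/avc" and TAG are my own placeholders):
MediaCodecInfo info = codec.getCodecInfo();
MediaCodecInfo.CodecCapabilities caps = info.getCapabilitiesForType("video/avc");
for (MediaCodecInfo.CodecProfileLevel pl : caps.profileLevels) {
    // profile and level are integer constants from MediaCodecInfo.CodecProfileLevel
    Log.i(TAG, "supported profile=" + pl.profile + " level=" + pl.level);
}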
Trying to set the profile
I can't see a field to set the profile on MediaCodec or on MediaFormat.
There is a KEY_AAC_PROFILE for MediaFormat, but the reference explicitly states this is for AAC audio only.
In MediaCodec there is a way to pass a Bundle of extra parameters to the codec using setParameters(). This looks like it has no generic documentation, and the parameters which are accepted will differ between codecs (and between devices).
Background
Profiles specify a set of encoding features to use. Simpler profiles are less computationally intense, but as a result generally sacrifice quality for a given bit rate. Levels specify the maximum resolution/bit rate supported for a given profile. I expected levels usually to be associated with decoding capability, but since this describes a hardware encoder which has to run in real time, having a maximum setting makes sense to me.
META: (I originally had a lot more links to each class + function I mentioned, but I had to remove them because I don't yet have the rep to post more than 2 links.)
For Android KitKat devices we can set the desired AVC profile and level in the media format, as per the code snippet below (sets Baseline profile with Level 1.3). Set this before starting the MediaCodec.
format.setInteger(MediaFormat.KEY_PROFILE, MediaCodecInfo.CodecProfileLevel.AVCProfileBaseline);
format.setInteger(MediaFormat.KEY_LEVEL, MediaCodecInfo.CodecProfileLevel.AVCLevel13);
There is no explicit profile selection in the MediaCodec API. The codec will select a profile on its own. It will also select a level based on the input width/height and frame-rate numbers, but it may select a level higher than the minimum required for the configuration.
You can query the encoder codec for the supported levels to see which profiles it may generate, but you will not be able to select a preferred one. Most likely the codec will select the highest profile it can handle for the given level, but there is no guarantee of that.
Starting with Android 5.0, it is possible to set profile (SDK Level 21) and level (SDK Level 23).
http://developer.android.com/reference/android/media/MediaFormat.html#KEY_PROFILE
http://developer.android.com/reference/android/media/MediaFormat.html#KEY_LEVEL
Haven't yet tried how well these work in practice.
Profile and Level hints can be ignored by a given AVC encoder.
To be absolutely sure which profile and level have been selected by the AVC encoder, you can query the SPS in realtime from its MediaFormat after it has started (like during the initial format change event).
See Section 7 of the spec.
This poster shows how to pull the profile and level from these Base64 strings (please upvote him!):
Can profile-level-id and sprop-parameter-sets be extracted from an RTP stream?
Here are the Base64 encoded SPS and PPS:
...
mIndex = mMediaEncoder.dequeueOutputBuffer(mBufferInfo, TIMEOUT);
// The first output after start() carries the codec config; for AVC,
// csd-0 holds the SPS and csd-1 holds the PPS
if (mIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED)
{
    Log.i(TAG, "SPS: " + getParameterSetAsString(mMediaEncoder, "csd-0"));
    Log.i(TAG, "PPS: " + getParameterSetAsString(mMediaEncoder, "csd-1"));
}
...
private String getParameterSetAsString(MediaCodec encoder, String csd)
{
    MediaFormat mMediaFormat = encoder.getOutputFormat();
    ByteBuffer ps = mMediaFormat.getByteBuffer(csd);
    // The actual SPS/PPS bytes follow the 4-byte start code (0x00000001)
    byte[] mPS = new byte[ps.capacity() - 4];
    ps.position(4);
    ps.get(mPS, 0, mPS.length);
    // Convert to String
    return Base64.encodeToString(mPS, 0, mPS.length, Base64.NO_WRAP);
}
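If you would rather read the numbers directly than decode the Base64 by hand, note that in the raw SPS bytes extracted above (start code stripped, NAL header at index 0) profile_idc and level_idc sit at fixed offsets; a small sketch, assuming mPS holds the SPS from csd-0:
// mPS[0] is the NAL header (0x67 for an SPS), then profile_idc,
// constraint flags, and level_idc, per Section 7 of the H.264 spec
int profileIdc = mPS[1] & 0xff; // e.g. 66 = Baseline, 77 = Main, 100 = High
int levelIdc = mPS[3] & 0xff;   // e.g. 13 = Level 1.3, 31 = Level 3.1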

MediaCodec encoding ignores my BUFFER_FLAG_SYNC_FRAME flag

In my Android application, I am encoding some media in WebM (VP8) format using MediaCodec. The encoding is working as expected. However, I need to ensure that I create a sync frame once in a while. Here is what I do:
encoder.queueInputBuffer(..., MediaCodec.BUFFER_FLAG_SYNC_FRAME);
Later in the code, I check for the sync frame:
encoder.dequeueOutputBuffer(bufferInfo, 0);
boolean isSyncFrame = (bufferInfo.flags & MediaCodec.BUFFER_FLAG_SYNC_FRAME) != 0;
The problem is that isSyncFrame never gets a true value.
I am wondering if I am making a mistake in my encoding configuration. Maybe there is a better way to tell the encoder to create a sync frame once in a while.
I hope it is not a bug in MediaCodec. Thank you in advance for your help.
There is no (current as of Android 4.3) way to request an on-demand sync frame using MediaCodec encoders. This is partly due to OMX, the underlying codec implementation in Android, which does not provide a way to specify which input frame should be encoded as a sync frame, although it does have a way to trigger a sync frame "in the near future".
feisal's answer is the only currently supported way to control sync frames, but you have to do it at configuration time.
Edit (re: jesup): You can trigger a sync frame in the near future using MediaCodec.setParameters():
Bundle params = new Bundle();
params.putInt(MediaCodec.PARAMETER_KEY_REQUEST_SYNC_FRAME, 0);
mCodec.setParameters(params);
Unfortunately, there is no (reliable) way to tell in MediaCodec if an encoded buffer is a sync frame other than doing it on your own by inspecting the byte-codes.
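For VP8 specifically, a small sketch of such an inspection (per the VP8 bitstream format, the lowest bit of the first payload byte is the frame type):
ByteBuffer buf = outputBuffers[outputBufferIndex];
// low bit of the first byte of the VP8 frame tag: 0 = key frame, 1 = interframe
boolean isKeyFrame = (buf.get(bufferInfo.offset) & 0x01) == 0;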
You can set the rate of I-frames in the MediaFormat object of your encoder with setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, secsBetweenIFrames);
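In context, a minimal encoder setup with a periodic sync frame might look like this (all values illustrative):
MediaFormat format = MediaFormat.createVideoFormat("video/x-vnd.on2.vp8", 640, 480);
format.setInteger(MediaFormat.KEY_BIT_RATE, 1000000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 2); // a sync frame every 2 seconds
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);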

Writing custom codecs for Android using FFmpeg

I am doing a video compression project for Android and I am thinking of implementing it by designing a new video codec (from scratch; I have designed the algorithm). I have already read the basics of video compression, the relevant algorithms, and codec basics. I have also found that FFmpeg may serve as a quite good solution on Android.
Now my questions come:
How do I write a new video codec as in FFmpeg? I am still a beginner at writing codecs, but how do I start? I have a rough idea that you have to write at least a demuxer first and then the specific encoder and decoder, etc. (Asking for references here, please.)
My codec doesn't simply adjust video properties like fps, resolution, bit rate, etc., so: is reading the MediaCodec API and MediaPlayer API in the official Android SDK enough for writing new codecs? (Because last time I checked, it only had support for MPEG-4 SP, H.263 and H.264. I was unable to find whether you could directly write your own classes and functions.)
Thanks.
You can use ffmpeg as a tool or the ffmpeg set of libraries (libavcodec, libavformat, …) on Android. You can add or change ffmpeg codecs in a cross-platform manner, because this project puts a strong emphasis on platform independence. You can use the MediaCodec API instead. But there is no way to extend the MediaCodec API (update: it is possible to extend MediaCodec; it is documented at http://source.android.com/devices/media.html#codecs ) and no easy way to let ffmpeg use this API.
If you are a newb and "just want to do it in SW", then just do it in SW. I am assuming your algorithm does not need to be real-time and compress video data on the fly, or you would need to use a HW codec.
This is from the Android MediaCodec reference:
MediaCodec codec = MediaCodec.createDecoderByType(type);
codec.configure(format, ...);
codec.start();
ByteBuffer[] inputBuffers = codec.getInputBuffers();
ByteBuffer[] outputBuffers = codec.getOutputBuffers();
MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
for (;;) {
    int inputBufferIndex = codec.dequeueInputBuffer(timeoutUs);
    if (inputBufferIndex >= 0) {
        // fill inputBuffers[inputBufferIndex] with valid data
        ...
        codec.queueInputBuffer(inputBufferIndex, ...);
    }
    int outputBufferIndex = codec.dequeueOutputBuffer(bufferInfo, timeoutUs);
    if (outputBufferIndex >= 0) {
        // outputBuffer is ready to be processed or rendered.
        ...
        codec.releaseOutputBuffer(outputBufferIndex, ...);
    } else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
        outputBuffers = codec.getOutputBuffers();
    } else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        // Subsequent data will conform to new format.
        MediaFormat format = codec.getOutputFormat();
        ...
    }
}
codec.stop();
codec.release();
codec = null;
On the line that reads "// outputBuffer is ready to be processed or rendered" apply your codec.
That is, each dequeued outputBufferIndex refers to one ByteBuffer in outputBuffers[] holding one complete output frame. The buffers are recycled by the codec, so copy the frame out before calling releaseOutputBuffer(), something like this:
// Copy the decoded frame out of the dequeued output buffer
ByteBuffer outputBuffer = outputBuffers[outputBufferIndex];
byte[] frame = new byte[bufferInfo.size];
outputBuffer.position(bufferInfo.offset);
outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
outputBuffer.get(frame, 0, bufferInfo.size);
Here is a good example. You may want to look into MediaMetadataRetriever to get information about the input video - height, width, byte size per pixel, etc. - if you want your encoder to be robust to different types of video. Anyway, that should get you started.
I strongly recommend Matlab (or GNU Octave) for prototyping a video codec. It will save you a ton of time. Meaning: you should make sure your intended codec algorithm works before trying to implement it on a system that is nearly impossible to debug, like Android.
Hope this helps.
If someone stumbles across this old question, the answer is:
1) Write your program.
2) Where you want the codec to go, simply add a 'null codec' (copy input to output).
3) Test that your program still works and that you can read the (so-called) encoded file.
4) Add your codec where the 'null codec' was (call a function to avoid big edits to a working file).
5) Re-test your program to ensure it still works, and read the output to make sure it is correct.
That is all. ;)
Things to consider:
A "Video Player" can drop Frames, a "Video Recorder" had better NOT
drop Frames.
A 'Software Codec' (no Hardware assist) will be slow,
run it on a different Core, if available.
A Hardware Codec (called from Software) will be necessary unless you are just making a
Demo.
Split your program into pieces that can run separately so it can be threaded and those threads can be assigned to different cores. You will need to detect the number of cores and assess their speed so you can do some of the partitioning dynamically at runtime.
Use of the NDK and assembly-language programming will be necessary to get enough speed to compress a decent-sized video at the wanted frame rate (i.e. you do not want your finished program to only support 320x176 @ 5 FPS video). The compressor MUST run faster than its input arrives.
Designing your own codec to beat an existing codec (x265) will take you years (without help).
If you're a whiz at Java, C, and ARM assembly (and a software engineer), it will take more than a couple of months of work, so commit or quit. Try to find some open source as a base to start from.
