In my Android application, I am encoding some media in webm (vp8) format using MediaCodec. The encoding is working as expected. However, I need to ensure that I create a sync frame once in a while. Here is what I do:
encoder.queueInputBuffer(..., MediaCodec.BUFFER_FLAG_SYNC_FRAME);
Later in the code, I check for sync frame:
encoder.dequeueOutputBuffer(bufferInfo, 0);
boolean isSyncFrame = (bufferInfo.flags & MediaCodec.BUFFER_FLAG_SYNC_FRAME) != 0;
The problem is that isSyncFrame never gets a true value.
I am wondering if I am making a mistake in my encoding configuration. Maybe there is a better way to tell the encoder to create a sync frame once in a while.
I hope it is not a bug in MediaCodec. Thank you in advance for your help.
There is no way (current as of Android 4.3) to request an on-demand sync frame using MediaCodec encoders. This is partly due to OMX, the underlying codec implementation in Android, which does not provide a way to specify which input frame should be encoded as a sync frame; it does, however, have a way to trigger a sync frame "in the near future".
feisal's answer is the only currently supported way to control sync frames, but you have to do it at configuration time.
==edit re: jesup
You can trigger a sync frame in the near future using MediaCodec.setParameters:
Bundle params = new Bundle();
params.putInt(MediaCodec.PARAMETER_KEY_REQUEST_SYNC_FRAME, 0);
mCodec.setParameters(params);
Unfortunately, there is no (reliable) way in MediaCodec to tell whether an encoded buffer is a sync frame, other than checking for it yourself by inspecting the bitstream.
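For VP8 that inspection is cheap. Per RFC 6386, bit 0 of the first byte of the frame tag is the frame type: 0 for a key frame, 1 for an interframe. A minimal sketch (assuming buf is positioned at the start of an encoded VP8 frame):

// Sketch: VP8 key frame test per RFC 6386, section 9.1.
// Bit 0 of the first byte of the frame tag is 0 for key frames.
static boolean isVp8KeyFrame(ByteBuffer buf) {
    return (buf.get(buf.position()) & 0x01) == 0;
}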
You can set the rate of I-frames in the MediaFormat object of your encoder with setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, secsBetweenIFrames), where the value is the number of seconds between I-frames.
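For example, a minimal sketch of a VP8 encoder configuration with a 5-second sync frame interval (the resolution, bit rate, frame rate, and color format are illustrative; supported color formats in particular vary by device):

MediaFormat format = MediaFormat.createVideoFormat("video/x-vnd.on2.vp8", 640, 480);
format.setInteger(MediaFormat.KEY_BIT_RATE, 1000000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5); // one sync frame every 5 seconds
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);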
I am trying to decode a video from a file and encode it into a different format with MediaCodec in the new Asynchronous Mode supported in API Level 21 and up (Android OS 5.0 Lollipop).
There are many examples for doing this in Synchronous Mode on sites such as Big Flake, Google's Grafika, and dozens of answers on StackOverflow, but none of them support Asynchronous mode.
I do not need to display the video during the process.
I believe that the general procedure is to read the file with a MediaExtractor as the input to a MediaCodec(decoder), allow the output of the Decoder to render into a Surface that is also the shared input into a MediaCodec(encoder), and then finally to write the Encoder output file via a MediaMuxer. The Surface is created during setup of the Encoder and shared with the Decoder.
I can Decode the video into a TextureView, but sharing the Surface with the Encoder instead of the screen has not been successful.
I set up MediaCodec.Callback()s for both of my codecs. I believe that one issue is that I do not know what to do in the encoder's callback's onInputBufferAvailable() function. I do not want to (or know how to) copy data from the Surface into the encoder; that should happen automatically (as is done on the decoder output with codec.releaseOutputBuffer(outputBufferId, true);). Yet, I believe that onInputBufferAvailable requires a call to codec.queueInputBuffer in order to function. I just don't know how to set the parameters without getting data from something like a MediaExtractor as used on the decode side.
If you have an Example that opens up a video file, decodes it, encodes it to a different resolution or format using the asynchronous MediaCodec callbacks, and then saves it as a file, please share your sample code.
=== EDIT ===
Here is a working example in synchronous mode of what I am trying to do in asynchronous mode: ExtractDecodeEditEncodeMuxTest.java: https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/ExtractDecodeEditEncodeMuxTest.java This example is working in my application
I believe you shouldn't need to do anything in the encoder's onInputBufferAvailable() callback - you should not call encoder.queueInputBuffer(). Just as you never call encoder.dequeueInputBuffer() and encoder.queueInputBuffer() manually when doing Surface input encoding in synchronous mode, you shouldn't do it in asynchronous mode either.
When you call decoder.releaseOutputBuffer(outputBufferId, true); (in both synchronous and asynchronous mode), this internally (using the Surface you provided) dequeues an input buffer from the surface, renders the output into it, and enqueues it back to the surface (to the encoder). The only difference between synchronous and asynchronous mode is in how the buffer events are exposed in the public API, but when using Surface input, a different (internal) API is used to access the same functionality, so synchronous vs asynchronous mode shouldn't matter for this at all.
So as far as I know (although I haven't tried it myself), you should just leave the onInputBufferAvailable() callback empty for the encoder.
EDIT:
So, I tried doing this myself, and it's (almost) as simple as described above.
If the encoder input surface is configured directly as output to the decoder (with no SurfaceTexture inbetween), things just work, with a synchronous decode-encode loop converted into an asynchronous one.
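For reference, a rough sketch of that direct wiring (variable names are illustrative; createInputSurface must be called after configuring the encoder and before starting it):

encoder.configure(encoderFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface inputSurface = encoder.createInputSurface(); // owned by the encoder
decoder.configure(decoderFormat, inputSurface, null, 0); // decoder renders into it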
If you use SurfaceTexture, however, you may run into a small gotcha. There is an issue with how one waits for frames to arrive to the SurfaceTexture in relation to the calling thread, see https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/DecodeEditEncodeTest.java#106 and https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/EncodeDecodeTest.java#104 and https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/OutputSurface.java#113 for references to this.
The issue, as far as I see it, is in awaitNewImage, as in https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/OutputSurface.java#240. If the onOutputBufferAvailable callbacks are delivered on the main thread and you call awaitNewImage from there, you have a problem: awaitNewImage blocks the whole thread with wait(), waiting for an onFrameAvailable callback that is also supposed to be delivered on the main thread, and which therefore can't run until the current method returns.
So we need to make sure that the onFrameAvailable callbacks come on a different thread than the one that calls awaitNewImage. One pretty simple way of doing this is to create a new separate thread, that does nothing but service the onFrameAvailable callbacks. To do that, you can do e.g. this:
private HandlerThread mHandlerThread = new HandlerThread("CallbackThread");
private Handler mHandler;
...
// start the thread and bind a Handler to its Looper
mHandlerThread.start();
mHandler = new Handler(mHandlerThread.getLooper());
...
// deliver onFrameAvailable on the handler thread, not the main thread
mSurfaceTexture.setOnFrameAvailableListener(this, mHandler);
I hope this is enough for you to be able to solve your issue, let me know if you need me to edit one of the public examples to implement asynchronous callbacks there.
EDIT2:
Also, since the GL rendering might be done from within the onOutputBufferAvailable callback, this might be a different thread than the one that set up the EGL context. So in that case, one needs to release the EGL context in the thread that set it up, like this:
mEGL.eglMakeCurrent(mEGLDisplay, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_CONTEXT);
And reattach it in the other thread before rendering:
mEGL.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext);
EDIT3:
Additionally, if the encoder and decoder callbacks are received on the same thread, the decoder's onOutputBufferAvailable, which does the rendering, can block the encoder callbacks from being delivered. If they aren't delivered, the rendering can block indefinitely, since the encoder doesn't get its output buffers returned. This can be fixed by making sure the video decoder callbacks are received on a different thread, which also avoids the issue with the onFrameAvailable callback.
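A minimal sketch of that, assuming API 23+ where setCallback accepts a Handler (names are illustrative):

// Give the decoder its own callback thread so a blocking render in its
// onOutputBufferAvailable can't starve the encoder's callbacks.
HandlerThread decoderCallbackThread = new HandlerThread("DecoderCallbacks");
decoderCallbackThread.start();
decoder.setCallback(decoderCallback, new Handler(decoderCallbackThread.getLooper()));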
I tried implementing all this on top of ExtractDecodeEditEncodeMuxTest, and got it working seemingly fine, have a look at https://github.com/mstorsjo/android-decodeencodetest. I initially imported the unchanged test, and did the conversion to asynchronous mode and fixes for the tricky details separately, to make it easy to look at the individual fixes in the commit log.
You can also set the Handler directly on the MediaCodec encoder, so that its callbacks are delivered on that Handler's thread; see the codec.setCallback(new AudioEncoderCallback(aacSamplePreFrameSize), mHandler); line below:
MyAudioCodecWrapper myMediaCodecWrapper;

public MyAudioEncoder(long startRecordWhenNs) {
    super.startRecordWhenNs = startRecordWhenNs;
}

@RequiresApi(api = Build.VERSION_CODES.M)
public MyAudioCodecWrapper prepareAudioEncoder(AudioRecord _audioRecord, int aacSamplePreFrameSize) throws Exception {
    if (_audioRecord == null || aacSamplePreFrameSize <= 0)
        throw new Exception();

    audioRecord = _audioRecord;
    Log.d(TAG, "audioRecord:" + audioRecord.getAudioFormat() + ",aacSamplePreFrameSize:" + aacSamplePreFrameSize);

    // callbacks will be delivered on this handler's thread
    mHandlerThread.start();
    mHandler = new Handler(mHandlerThread.getLooper());

    MediaFormat audioFormat = new MediaFormat();
    audioFormat.setString(MediaFormat.KEY_MIME, MIMETYPE_AUDIO_AAC);
    //audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, BIT_RATE);
    audioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
    audioFormat.setInteger(MediaFormat.KEY_SAMPLE_RATE, audioRecord.getSampleRate()); // 44100
    audioFormat.setInteger(MediaFormat.KEY_CHANNEL_COUNT, audioRecord.getChannelCount()); // 1 (mono)
    audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, 128000);
    audioFormat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 16384);

    MediaCodec codec = MediaCodec.createEncoderByType(MIMETYPE_AUDIO_AAC);
    codec.configure(audioFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    codec.setCallback(new AudioEncoderCallback(aacSamplePreFrameSize), mHandler);
    //codec.start();

    MyAudioCodecWrapper myMediaCodecWrapper = new MyAudioCodecWrapper();
    myMediaCodecWrapper.mediaCodec = codec;
    super.mediaCodec = codec;
    return myMediaCodecWrapper;
}
I'm developing an Android video app where I need to get the current frame number of the video being displayed while in pause mode.
I need to send my server the frame number at which the video is currently paused and get back a list of items regarding that frame/time. Right now I'm sending the current paused time in milliseconds, but it doesn't work quite well, because the server compares the sent time to a specific frame it calculated based on that time, and sometimes the comparison is not exact.
I know you can get a bitmap of that frame using MediaMetadataRetriever, and I did, but it returns a bitmap image and what I need is an index.
I'm using ExoPlayer (I need that feature for MP4 and for HLS, too, if that matters).
Is there a way to get that info from the video?
I'll post the solution to my problem. In order to get the exact frame time, I simply extended the MediaCodecVideoTrackRenderer.java class from the ExoPlayer library and used the value of lastOutputBufferTimestamp, which is set in this function:
@Override
protected boolean processOutputBuffer(long positionUs, long elapsedRealtimeUs,
        MediaCodec codec, ByteBuffer buffer, MediaCodec.BufferInfo bufferInfo, int bufferIndex,
        boolean shouldSkip) {
    boolean processed = super.processOutputBuffer(positionUs, elapsedRealtimeUs, codec, buffer,
            bufferInfo, bufferIndex, shouldSkip);
    if (!shouldSkip && processed) {
        lastOutputBufferTimestamp = bufferInfo.presentationTimeUs;
    }
    return processed;
}
It gives me the exact time, not a rounded time from, let's say, mPlayer.getDuration() or something like that.
If you have a constant FPS in your video you can calculate that by division and get the number of the frame.
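For example, a sketch of that division (videoFps is assumed to be known, e.g. from the stream metadata):

// presentationTimeUs is in microseconds, so frames elapsed = time * fps / 1,000,000
long frameNumber = (lastOutputBufferTimestamp * videoFps) / 1000000L;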
It was simply enough for me to know the exact frame time.
I'm using ExoPlayer version r1.5.3 so I don't know if this solution will work for newer version since code has probably changed.
I am doing a video compression project for Android and I am thinking of implementing it by designing a new video codec (from scratch; I have already designed the algorithm). I have already read the basics of video compression, the relevant algorithms, and codec basics. I have also found that FFmpeg may serve as a quite good solution on Android.
Now my questions come:
How do I write a new video codec as in FFmpeg? I am still a beginner at writing codecs, but how do I start? I have a rough idea that you have to write at least a demuxer first and then the specific encoder and decoder, etc. (Asking for references here, please.)
Since my codec doesn't simply adjust video properties like fps, resolution, bit rate, etc., is reading the MediaCodec API and MediaPlayer API in the official Android SDK enough for writing new codecs? (Because last time I checked, it only supported MPEG-4 SP, H.263, and H.264, and I was unable to find out whether you can directly write your own classes and functions.)
Thanks.
You can use ffmpeg as a tool or use the ffmpeg set of libraries (libavcodec, libavformat, …) on Android. You can add or change ffmpeg codecs in a cross-platform manner, because this project puts a strong emphasis on platform independence. Or you can use the MediaCodec API instead. However, there is no way to extend the MediaCodec API (update: it is possible to extend MediaCodec; it is documented at http://source.android.com/devices/media.html#codecs) and no easy way to let ffmpeg use this API.
If you are a newb and "just want to do it in SW", then just do it in SW. I am assuming your algorithm does not need to run in real time and compress video data on the fly, or you would need to use a HW codec.
This is from the Android MediaCodec reference:
MediaCodec codec = MediaCodec.createDecoderByType(type);
codec.configure(format, ...);
codec.start();

ByteBuffer[] inputBuffers = codec.getInputBuffers();
ByteBuffer[] outputBuffers = codec.getOutputBuffers();
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
for (;;) {
    int inputBufferIndex = codec.dequeueInputBuffer(timeoutUs);
    if (inputBufferIndex >= 0) {
        // fill inputBuffers[inputBufferIndex] with valid data
        ...
        codec.queueInputBuffer(inputBufferIndex, ...);
    }

    int outputBufferIndex = codec.dequeueOutputBuffer(info, timeoutUs);
    if (outputBufferIndex >= 0) {
        // outputBuffer is ready to be processed or rendered.
        ...
        codec.releaseOutputBuffer(outputBufferIndex, ...);
    } else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
        outputBuffers = codec.getOutputBuffers();
    } else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        // Subsequent data will conform to new format.
        MediaFormat format = codec.getOutputFormat();
        ...
    }
}
codec.stop();
codec.release();
codec = null;
On the line that reads "// outputBuffer is ready to be processed or rendered", apply your codec.
That is, each time dequeueOutputBuffer succeeds, the bytes of your frame are in outputBuffers[outputBufferIndex], at the offset and size described by the BufferInfo. The output buffers form a circular pool owned by the codec, so copy the data out before releasing the buffer, something like this:
// init
byte[] processBuffer = new byte[0];
...
// outputBuffer is ready (outputBufferIndex >= 0)
ByteBuffer outputBuffer = outputBuffers[outputBufferIndex];
if (processBuffer.length < info.size) {
    processBuffer = new byte[info.size]; // grow the scratch buffer as needed
}
outputBuffer.position(info.offset);
outputBuffer.get(processBuffer, 0, info.size); // copy this frame's bytes out
// run your compression algorithm on processBuffer, then release the buffer
Here is a good example. You may want to look into MediaMetadataRetriever to get information about the input video (height, width, byte size per pixel, etc.) if you want your encoder to be robust to different types of video. Anyway, that should get you started.
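A minimal sketch of that probing (inputPath is illustrative; extractMetadata can return null for missing keys):

MediaMetadataRetriever mmr = new MediaMetadataRetriever();
mmr.setDataSource(inputPath);
int width = Integer.parseInt(mmr.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_WIDTH));
int height = Integer.parseInt(mmr.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_HEIGHT));
mmr.release();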
I strongly recommend Matlab (or GNU Octave) for prototyping a video codec. It will save you a ton of time. Meaning: you should make sure your intended codec algorithm works before trying to implement it on a nearly impossible-to-debug system like Android.
Hope this helps.
If someone stumbles across this old question, the answer is:
Write your Program.
Where you want the "Codec" to go simply add a 'null Codec' (copy Input to Output).
Test that your Program still works and that you can read the (so-called) encoded File.
Add your Codec where the 'null Codec' was (call a Function to avoid big edits to a working File).
Re-Test your Program to ensure it still works and read the Output to make sure it is correct.
That is all. ;)
Things to consider:
A "Video Player" can drop Frames; a "Video Recorder" had better NOT drop Frames.
A 'Software Codec' (no Hardware assist) will be slow; run it on a different Core, if available.
A Hardware Codec (called from Software) will be necessary unless you are just making a Demo.
Split your Program into pieces that can run separately so it can be threaded and those Threads can be assigned to different Cores. You will need to detect the number of Cores and assess their speed so you can do some of the partitioning dynamically at Runtime.
Use of the NDK and Assembly Language Programming will be necessary to get enough speed to compress a decent sized Video at a wanted frame rate (i.e.: you do not want your finished Program to only support 320x176 @ 5 FPS Videos). The Compressor MUST run faster than its Input arrives.
Designing your own Codec to beat an existing Codec (x265) will take you years (without help).
If you're a Wiz at Java, C, and ARM Assembly (and a Software Engineer) it will take more than a couple of months of work; so commit or quit. Try to find some Open Source as a base to start from.
I am using the new MediaCodec API on Jelly Bean to decode an h264 stream.
Using the code snippets on the developer page, I instantiated a decoder by name (taken from media_codec.xml), passed a surface, and configured the codec.
The problem I am facing is that dequeueOutputBuffer always returns -1.
I tried with a negative timeout to wait indefinitely; no luck with that.
Whenever I got a -1, I refreshed the buffers using getOutputBuffers().
Please note that the same issue is seen when a custom app is used to parse the data from a media source and provide to decoder.
Any inputs on the above will be helpful.
I had faced the same problem. Incrementing the presentationTimeUs parameter of queueInputBuffer() on each call solved the issue.
For example,
codec.queueInputBuffer(inputBufferIndex, 0, data.size, time, 0)
time += 66 //incrementing by 1 works too
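A note on the increment: presentationTimeUs is in microseconds, so if you know the frame rate, advancing by the real frame duration keeps the timestamps meaningful (a sketch; fps is assumed known):

time += 1000000L / fps; // microseconds per frame, e.g. 33333 at 30 fps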
If anyone else is facing this problem (as I did today) while starting with MediaCodec, make sure to release the output buffers after you're done with them:
mediaCodec.releaseOutputBuffer(index, render);
or else the codec will run out of available buffers pretty soon.
It may be necessary to feed several input buffers before obtaining data in an output buffer.
-1 is INFO_TRY_AGAIN_LATER, meaning the output buffer queue is still being prepared and you just need to call dequeueOutputBuffer again.
Try using a work loop that calls dequeueOutputBuffer in a loop similar to ExoPlayer:
while (drainOutputBuffer(positionUs, elapsedRealtimeUs)) {}
if (feedInputBuffer(true)) {
    while (feedInputBuffer(false)) {}
}
where drainOutputBuffer is a method that calls dequeueOutputBuffer.
I am trying to display video buffers on an Android device. I am using the MediaCodec API released in Android 4.1 Jelly Bean.
The sample goes like this:
MediaCodec codec = MediaCodec.createDecoderByType(type);
codec.configure(format, ...);
The configure method accepts 3 other arguments, apart from MediaFormat. I have been able to figure out MediaFormat somehow, but I am not sure about the other 3 parameters (below):
Surface, MediaCrypto, and flags.
Any leads?
Also, what should I do with the MediaCrypto argument if I am not encrypting my video buffers?
Requirements:
1) Decode the buffers on the Android device,
2) Display them on the screen.
You can see the article here:
http://dpsm.wordpress.com/2012/07/28/android-mediacodec-decoded/
Just for completeness:
To decode -
The Surface is the surface to render the frames to (or null if not rendering).
MediaCrypto should be null if there is no encryption.
flags == 0 if decoding, or MediaCodec.CONFIGURE_FLAG_ENCODE if encoding.
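So a minimal decode configure call might look like this (a sketch; the MIME type and surface are assumptions based on your setup):

MediaCodec codec = MediaCodec.createDecoderByType("video/avc"); // e.g. H.264
codec.configure(format, surface, null /* no MediaCrypto */, 0 /* flags: decoding */);
codec.start();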