SurfaceTexture's onFrameAvailable() method always called too late - android

I'm trying to get the following MediaExtractor example to work:
http://bigflake.com/mediacodec/ - ExtractMpegFramesTest.java (requires 4.1, API 16)
The problem I have is that outputSurface.awaitNewImage(); seems to always throw RuntimeException("frame wait timed out"), which is thrown whenever the mFrameSyncObject.wait(TIMEOUT_MS) call times out. No matter what I set TIMEOUT_MS to be, onFrameAvailable() always gets called right after the timeout occurs. I tried with 50ms and with 30000ms and it's the same.
It seems like the onFrameAvailable() callback can't be processed while the thread is busy, and only once the timeout ends the thread's wait can it process the pending onFrameAvailable() call.
Has anyone managed to get this example to work, or knows how MediaExtractor is supposed to work with GL textures?
Edit: tried this on devices running Android 4.4 and 4.1.1 and the same happens on both.
Edit 2:
Got it working on 4.4 thanks to fadden. The issue was that the ExtractMpegFramesWrapper.runTest() method called th.join();, which blocked the main thread and prevented the onFrameAvailable() call from being processed. Once I commented out th.join(); it works on 4.4. I guess ExtractMpegFramesWrapper.runTest() itself was supposed to run on yet another thread so the main thread didn't get blocked.
There was also a small issue on 4.1.2 when calling codec.configure(); it gave the error:
A/ACodec(2566): frameworks/av/media/libstagefright/ACodec.cpp:1041 CHECK(def.nBufferSize >= size) failed.
A/libc(2566): Fatal signal 11 (SIGSEGV) at 0xdeadbaad (code=1), thread 2625 (CodecLooper)
Which I solved by adding the following before the call:
format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 0);
However the problem I have now on both 4.1.1 (Galaxy S2 GT-I9100) and 4.1.2 (Samsung Galaxy Tab GT-P3110) is that they both always set info.size to 0 for all frames. Here is the log output:
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
submitted frame 0 to dec, size=20562
no output from decoder available
loop
submitted frame 1 to dec, size=7193
no output from decoder available
loop
[... skipped 18 lines ...]
submitted frame 8 to dec, size=6531
no output from decoder available
loop
submitted frame 9 to dec, size=5639
decoder output format changed: {height=240, what=1869968451, color-format=19, slice-height=240, crop-left=0, width=320, crop-bottom=239, crop-top=0, mime=video/raw, stride=320, crop-right=319}
loop
submitted frame 10 to dec, size=6272
surface decoder given buffer 0 (size=0)
loop
[... skipped 1211 lines ...]
submitted frame 409 to dec, size=456
surface decoder given buffer 1 (size=0)
loop
sent input EOS
surface decoder given buffer 0 (size=0)
loop
surface decoder given buffer 1 (size=0)
loop
surface decoder given buffer 0 (size=0)
loop
surface decoder given buffer 1 (size=0)
loop
[... skipped 27 lines all with size=0 ...]
surface decoder given buffer 1 (size=0)
loop
surface decoder given buffer 0 (size=0)
output EOS
Saving 0 frames took ? us per frame // edited to avoid division-by-zero error
So no images get saved. However, the same code and video work on 4.3. The video I am using is an .mp4 file with "H264 - MPEG-4 AVC (avc1)" video codec and "MPEG AAC Audio (mp4a)" audio codec.
I also tried other video formats, but they seem to die even sooner on 4.1.x, while both work on 4.3.
Edit 3:
I did as you suggested, and it seems to save the frame images correctly. Thank you.
Regarding KEY_MAX_INPUT_SIZE, I tried not setting it, or setting it to 0, 20, 200, ... 200000000, all with the same result of info.size=0.
I am now unable to render to a SurfaceView or TextureView in my layout. I tried replacing this line:
mSurfaceTexture = new SurfaceTexture(mTextureRender.getTextureId());
with this, where textureView is a TextureView defined in my XML layout:
mSurfaceTexture = textureView.getSurfaceTexture();
mSurfaceTexture.attachToGLContext(mTextureRender.getTextureId());
but it throws a weird error with getMessage()==null on the second line. I couldn't find any other way to get it to draw on a View of some kind. How can I change the decoder to display the frames on a Surface/SurfaceView/TextureView instead of saving them?

The way SurfaceTexture works makes this a bit tricky to get right.
The docs say the frame-available callback "is called on an arbitrary thread". The SurfaceTexture class has a bit of code that does the following when initializing (line 318):
if (this thread has a looper) {
    handle events on this thread
} else if (there's a "main" looper) {
    handle events on the main UI thread
} else {
    no events for you
}
The frame-available events are delivered to your app through the usual Looper / Handler mechanism. That mechanism is just a message queue, which means the thread needs to be sitting in the Looper event loop waiting for them to arrive. The trouble is, if you're sleeping in awaitNewImage(), you're not watching the Looper queue. So the event arrives, but nobody sees it. Eventually awaitNewImage() times out, and the thread returns to watching the event queue, where it immediately discovers the pending "new frame" message.
So the trick is to make sure that frame-available events arrive on a different thread from the one sitting in awaitNewImage(). In the ExtractMpegFramesTest example, this is done by running the test in a newly-created thread (see the ExtractMpegFramesWrapper class), which does not have a Looper. (For some reason the thread that executes CTS tests has a looper.) The frame-available events arrive on the main UI thread.
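For illustration, here is a minimal sketch of that arrangement (doExtract() is a hypothetical method holding the decode loop): the work runs on a plain Thread that never prepares a Looper, so SurfaceTexture falls back to delivering frame-available events on the main looper, which stays free to process them.

// Hedged sketch: decode on a plain worker thread (no Looper), so
// onFrameAvailable() is delivered on the main UI thread instead.
Thread worker = new Thread(new Runnable() {
    @Override public void run() {
        try {
            doExtract();   // hypothetical method containing the decode loop
        } catch (Exception e) {
            Log.e(TAG, "extraction failed", e);
        }
    }
}, "ExtractThread");
worker.start();
// Do NOT join() from the main thread here, or the frame-available
// messages will never be processed (see "Edit 2" above).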
Update (for "edit 3"): I'm a bit sad that ignoring the "size" field helped, but pre-4.3 it's hard to predict how devices will behave.
If you just want to display the frame, pass the Surface you get from the SurfaceView or TextureView into the MediaCodec decoder configure() call. Then you don't have to mess with SurfaceTexture at all -- the frames will be displayed as you decode them. See the two "Play video" activities in Grafika for examples.
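As a minimal sketch of that direct path (assuming a SurfaceView named surfaceView whose surface is already valid, and a MediaFormat obtained from MediaExtractor):

// Hedged sketch: decode straight to the display; no SurfaceTexture involved.
Surface surface = surfaceView.getHolder().getSurface();
MediaCodec decoder = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
decoder.configure(format, surface, null, 0);  // frames will go to the SurfaceView
decoder.start();
// ...then in the output loop, rendering happens on release:
// decoder.releaseOutputBuffer(outputIndex, true);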
If you really want to go through a SurfaceTexture, you need to change CodecOutputSurface to render to a window surface rather than a pbuffer. (The off-screen rendering is done so we can use glReadPixels() in a headless test.)
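For reference, a hedged sketch of that change using EGL14 (mEGLConfig and mEGLContext are assumed to come from the existing EGL setup code):

// Hedged sketch: create a window surface bound to the View instead of a pbuffer.
Surface windowTarget = new Surface(textureView.getSurfaceTexture());
int[] surfaceAttribs = { EGL14.EGL_NONE };
EGLSurface eglSurface = EGL14.eglCreateWindowSurface(
        mEGLDisplay, mEGLConfig, windowTarget, surfaceAttribs, 0);
EGL14.eglMakeCurrent(mEGLDisplay, eglSurface, eglSurface, mEGLContext);
// After drawing each frame, present it instead of calling glReadPixels():
EGL14.eglSwapBuffers(mEGLDisplay, eglSurface);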

Related

Android MediaCodec Encode and Decode In Asynchronous Mode

I am trying to decode a video from a file and encode it into a different format with MediaCodec in the new Asynchronous Mode supported in API Level 21 and up (Android OS 5.0 Lollipop).
There are many examples for doing this in Synchronous Mode on sites such as Big Flake, Google's Grafika, and dozens of answers on StackOverflow, but none of them support Asynchronous mode.
I do not need to display the video during the process.
I believe that the general procedure is to read the file with a MediaExtractor as the input to a MediaCodec(decoder), allow the output of the Decoder to render into a Surface that is also the shared input into a MediaCodec(encoder), and then finally to write the Encoder output file via a MediaMuxer. The Surface is created during setup of the Encoder and shared with the Decoder.
I can decode the video into a TextureView, but sharing the Surface with the encoder instead of the screen has not been successful.
I set up MediaCodec.Callback()s for both of my codecs. I believe that one issue is that I do not know what to do in the encoder's onInputBufferAvailable() callback. I do not want to (or know how to) copy data from the Surface into the encoder - that should happen automatically (as is done on the decoder output with codec.releaseOutputBuffer(outputBufferId, true);). Yet, I believe that onInputBufferAvailable requires a call to codec.queueInputBuffer in order to function. I just don't know how to set the parameters without getting data from something like a MediaExtractor as used on the decode side.
If you have an Example that opens up a video file, decodes it, encodes it to a different resolution or format using the asynchronous MediaCodec callbacks, and then saves it as a file, please share your sample code.
=== EDIT ===
Here is a working example, in synchronous mode, of what I am trying to do in asynchronous mode: ExtractDecodeEditEncodeMuxTest.java: https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/ExtractDecodeEditEncodeMuxTest.java - this example works in my application.
I believe you shouldn't need to do anything in the encoder's onInputBufferAvailable() callback - you should not call encoder.queueInputBuffer(). Just as you never call encoder.dequeueInputBuffer() and encoder.queueInputBuffer() manually when doing Surface input encoding in synchronous mode, you shouldn't do it in asynchronous mode either.
When you call decoder.releaseOutputBuffer(outputBufferId, true); (in both synchronous and asynchronous mode), this internally (using the Surface you provided) dequeues an input buffer from the surface, renders the output into it, and enqueues it back to the surface (to the encoder). The only difference between synchronous and asynchronous mode is in how the buffer events are exposed in the public API, but when using Surface input, it uses a different (internal) API to access the same, so synchronous vs asynchronous mode shouldn't matter for this at all.
So as far as I know (although I haven't tried it myself), you should just leave the onInputBufferAvailable() callback empty for the encoder.
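To make that concrete, here is a hedged sketch (illustrative names, error handling omitted) of the decoder's asynchronous callback when its output surface is the encoder's input surface:

// Hedged sketch: the decoder renders into the encoder's input surface;
// the encoder's onInputBufferAvailable() stays empty for the video track.
decoder.setCallback(new MediaCodec.Callback() {
    @Override
    public void onInputBufferAvailable(MediaCodec codec, int index) {
        // feed compressed input from MediaExtractor here (omitted)
    }
    @Override
    public void onOutputBufferAvailable(MediaCodec codec, int index, MediaCodec.BufferInfo info) {
        boolean render = info.size > 0;
        codec.releaseOutputBuffer(index, render); // render==true pushes the frame to the encoder surface
        if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
            encoder.signalEndOfInputStream();     // EOS propagates via the surface
        }
    }
    @Override
    public void onOutputFormatChanged(MediaCodec codec, MediaFormat format) { }
    @Override
    public void onError(MediaCodec codec, MediaCodec.CodecException e) { }
});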
EDIT:
So, I tried doing this myself, and it's (almost) as simple as described above.
If the encoder input surface is configured directly as the output of the decoder (with no SurfaceTexture in between), things just work, with a synchronous decode-encode loop converted into an asynchronous one.
If you use SurfaceTexture, however, you may run into a small gotcha. There is an issue with how one waits for frames to arrive to the SurfaceTexture in relation to the calling thread, see https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/DecodeEditEncodeTest.java#106 and https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/EncodeDecodeTest.java#104 and https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/OutputSurface.java#113 for references to this.
The issue, as far as I see it, is in awaitNewImage, as in https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/OutputSurface.java#240. If the onFrameAvailable callback is supposed to be called on the main thread, we have an issue if the awaitNewImage call is also run on the main thread. If the onOutputBufferAvailable callbacks are also called on the main thread and you call awaitNewImage from there, we have a problem, since you'll end up waiting for a callback (with a wait() that blocks the whole thread) that can't run until the current method returns.
So we need to make sure that the onFrameAvailable callbacks come on a different thread than the one that calls awaitNewImage. One pretty simple way of doing this is to create a new, separate thread that does nothing but service the onFrameAvailable callbacks, e.g. like this:
private HandlerThread mHandlerThread = new HandlerThread("CallbackThread");
private Handler mHandler;
...
mHandlerThread.start();
mHandler = new Handler(mHandlerThread.getLooper());
...
mSurfaceTexture.setOnFrameAvailableListener(this, mHandler);
I hope this is enough for you to be able to solve your issue, let me know if you need me to edit one of the public examples to implement asynchronous callbacks there.
EDIT2:
Also, since the GL rendering might be done from within the onOutputBufferAvailable callback, this might be a different thread than the one that set up the EGL context. So in that case, one needs to release the EGL context in the thread that set it up, like this:
mEGL.eglMakeCurrent(mEGLDisplay, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_CONTEXT);
And reattach it in the other thread before rendering:
mEGL.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext);
EDIT3:
Additionally, if the encoder and decoder callbacks are received on the same thread, the decoder's onOutputBufferAvailable, which does the rendering, can block the encoder callbacks from being delivered. If they aren't delivered, the rendering can block indefinitely, since the encoder doesn't get its output buffers returned. This can be fixed by making sure the video decoder callbacks are received on a different thread, which also avoids the issue with the onFrameAvailable callback. A minimal sketch of that separation follows.
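As a hedged sketch (MediaCodec.setCallback(Callback, Handler) requires API 23; names are illustrative):

// Hedged sketch: give each codec its own callback thread so a blocking
// render in the decoder callback can't starve the encoder's callbacks.
HandlerThread decoderCbThread = new HandlerThread("DecoderCallbacks");
HandlerThread encoderCbThread = new HandlerThread("EncoderCallbacks");
decoderCbThread.start();
encoderCbThread.start();
decoder.setCallback(decoderCallback, new Handler(decoderCbThread.getLooper()));
encoder.setCallback(encoderCallback, new Handler(encoderCbThread.getLooper()));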
I tried implementing all this on top of ExtractDecodeEditEncodeMuxTest, and got it working seemingly fine, have a look at https://github.com/mstorsjo/android-decodeencodetest. I initially imported the unchanged test, and did the conversion to asynchronous mode and fixes for the tricky details separately, to make it easy to look at the individual fixes in the commit log.
You can also set the Handler directly in the MediaCodec.setCallback() call - see the codec.setCallback(new AudioEncoderCallback(aacSamplePreFrameSize), mHandler); line below:
MyAudioCodecWrapper myMediaCodecWrapper;
private HandlerThread mHandlerThread = new HandlerThread("AudioEncoderCallbacks"); // declaration implied by the usage below
private Handler mHandler;

public MyAudioEncoder(long startRecordWhenNs){
    super.startRecordWhenNs = startRecordWhenNs;
}

@RequiresApi(api = Build.VERSION_CODES.M)
public MyAudioCodecWrapper prepareAudioEncoder(AudioRecord _audioRecord, int aacSamplePreFrameSize) throws Exception {
    if (_audioRecord == null || aacSamplePreFrameSize <= 0)
        throw new Exception();
    audioRecord = _audioRecord;
    Log.d(TAG, "audioRecord:" + audioRecord.getAudioFormat() + ",aacSamplePreFrameSize:" + aacSamplePreFrameSize);
    mHandlerThread.start();
    mHandler = new Handler(mHandlerThread.getLooper());
    MediaFormat audioFormat = new MediaFormat();
    audioFormat.setString(MediaFormat.KEY_MIME, MIMETYPE_AUDIO_AAC);
    //audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, BIT_RATE);
    audioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
    audioFormat.setInteger(MediaFormat.KEY_SAMPLE_RATE, audioRecord.getSampleRate()); // 44100
    audioFormat.setInteger(MediaFormat.KEY_CHANNEL_COUNT, audioRecord.getChannelCount()); // 1 (mono)
    audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, 128000);
    audioFormat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 16384);
    MediaCodec codec = MediaCodec.createEncoderByType(MIMETYPE_AUDIO_AAC);
    codec.configure(audioFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    codec.setCallback(new AudioEncoderCallback(aacSamplePreFrameSize), mHandler); // callbacks arrive on mHandlerThread
    //codec.start();
    MyAudioCodecWrapper myMediaCodecWrapper = new MyAudioCodecWrapper();
    myMediaCodecWrapper.mediaCodec = codec;
    super.mediaCodec = codec;
    return myMediaCodecWrapper;
}

Low performance when executing eglSwapBuffers and eglMakeCurrent

I'm developing an Android Unity plugin that allows the user to record their gameplay.
Overview of my solution:
Using OpenGl FrameBufferObject (FBO) to make Unity render offscreen to this FBO
Get the offscreen texture of this FBO, then use it for two purposes:
Render to video surface
Redraw to device screen
Execute flow per frame:
bind my FBO
render scene to FBO (Unity code)
unbind my FBO
set up video surface
configure surface size (execute first time only)
save egl state
make video surface current
draw to video surface using offscreen texture of my FBO
restore to default surface
set presentation time to video frame
swap buffer from video surface to default window
restore egl state
make default surface current
notify encoder thread that data is ready to write
My issue is that performance while recording is not good: FPS drops from 60 to 40 on a Samsung Galaxy S4. I tried measuring the execution time of the render operations and found that the operations that hurt performance most are the "make video surface current" operation and the "swap buffer from video surface to default window" operation. Below is their code:
public void makeCurrent() {
    if (!EGL14.eglMakeCurrent(this.mEGLDisplay, this.mEGLSurface, this.mEGLSurface, this.mEGLContext))
        throw new RuntimeException("eglMakeCurrent failed");
}

public boolean swapBuffers() {
    return EGL14.eglSwapBuffers(this.mEGLDisplay, this.mEGLSurface);
}
Execution time of the make-current operation: 1 ~ 18 ms
Execution time of the swap-buffers operation: 4 ~ 14 ms
Execution time of the other operations: usually 0 ~ 1 ms
How to improve performance of these operations?
Any help will be greatly appreciated!
A lot of OpenGL calls are asynchronous, and some calls force OpenGL to wait for the queued operations to execute. So the times you are seeing are caused by the other calls queued before the actual call you are measuring.
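One way to confirm this, as a hedged sketch: drain the GL queue with glFinish() before taking timestamps, so the measurement no longer absorbs earlier queued work.

// Hedged sketch: glFinish() blocks until all queued GL commands complete,
// so the timing below reflects only the call being measured.
GLES20.glFinish();
long start = System.nanoTime();
if (!EGL14.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext))
    throw new RuntimeException("eglMakeCurrent failed");
long elapsedMs = (System.nanoTime() - start) / 1000000;
// Don't leave glFinish() in production code: it serializes CPU and GPU and
// will itself pay the cost that was previously hidden inside other calls.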

Strange performance of avcodec_decode_video2

I am developing an Android video player. I use ffmpeg in native code to decode video frame. In the native code, I have a thread called decode_thread that calls avcodec_decode_video2()
int decode_thread(void *arg) {
    avcodec_decode_video2(codecCtx, pFrame, &frameFinished, pkt);
}
I have another thread called display_thread that uses aNativeWindow to display a decoded frame on a SurfaceView.
The problem is that if I let the decode_thread run continuously without a delay, it significantly reduces the performance of avcodec_decode_video2(); sometimes it takes about 0.1 seconds to decode a frame. However, if I put a delay on the decode_thread, something like this:
int decode_thread(void *arg) {
    avcodec_decode_video2(codecCtx, pFrame, &frameFinished, pkt);
    usleep(20*1000);
}
The performance of avcodec_decode_video2() is really good, about 0.001 seconds per frame. However, putting a delay on the decode_thread is not a good solution because it affects the playback. Could anyone explain the behavior of avcodec_decode_video2() and suggest a solution?
It seems impossible that the performance of the video decoding function would improve just because your thread sleeps. Most likely the video decoding thread gets preempted by another thread, hence the increased timing (your thread wasn't doing any work during that time). When you add a call to usleep, it yields a context switch to another thread, so when your decoding thread is scheduled again it starts with a full CPU slice and is no longer interrupted inside avcodec_decode_video2.
What should you do? You surely want to decode packets a little ahead of when you show them - the performance of avcodec_decode_video2 certainly isn't constant, and if you try to stay just one frame ahead, you might not have enough time to decode one of the frames.
I'd create a producer-consumer queue for the decoded frames, with an upper limit on its size. The decoder thread is the producer; it should run until it fills up the queue and then wait until there's room for another frame. The display thread is the consumer; it takes frames from this queue and displays them. A sketch of the pattern follows.
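As a minimal Java sketch of that bounded queue (the real decoder lives in native code, but the pattern is the same; Frame, decodeNext() and render() are placeholders):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hedged sketch: bounded producer-consumer queue between decode and display.
final BlockingQueue<Frame> frames = new ArrayBlockingQueue<>(8); // upper limit

Thread decodeThread = new Thread(() -> {
    try {
        Frame f;
        while ((f = decodeNext()) != null) { // placeholder for the native decode call
            frames.put(f);                   // blocks while the queue is full
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
});

Thread displayThread = new Thread(() -> {
    try {
        while (true) {
            render(frames.take());           // take() blocks until a frame is ready
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
});

decodeThread.start();
displayThread.start();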

Android MediaCodec dequeueOutputBuffer always returns -1

I am using the new MediaCodec API on Jelly Bean to decode an h264 stream.
Using the code snippets from the developer page, I instantiated a decoder by name (taken from media_codec.xml), passed a surface, and configured the codec.
The problem I am facing is that dequeueOutputBuffer always returns -1.
I tried with a negative timeout to wait indefinitely; no luck with that.
Whenever I get a -1, I refreshed the buffers using getOutputBuffers().
Please note that the same issue is seen when a custom app is used to parse the data from a media source and provide it to the decoder.
Any inputs on the above will be helpful.
I faced the same problem. Incrementing the presentationTimeUs parameter of queueInputBuffer() on each call solved the issue.
For example,
codec.queueInputBuffer(inputBufferIndex, 0, data.size, time, 0)
time += 66 //incrementing by 1 works too
If anyone else is facing this problem (as I did today) while starting with MediaCodec, make sure to release the output buffers after you're done with them:
mediaCodec.releaseOutputBuffer(index, render);
or else the codec will run out of available buffers pretty soon.
It may be necessary to feed several input buffers before obtaining data in the output buffer.
-1 is INFO_TRY_AGAIN_LATER, meaning the output buffer queue is still being prepared and you just need to call dequeueOutputBuffer again.
Try using a work loop that calls dequeueOutputBuffer in a loop similar to ExoPlayer:
while (drainOutputBuffer(positionUs, elapsedRealtimeUs)) {}
if (feedInputBuffer(true)) {
    while (feedInputBuffer(false)) {}
}
where drainOutputBuffer is a method that calls dequeueOutputBuffer.
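For reference, a hedged sketch of how a plain synchronous loop typically handles the -1 return value (the timeout and rendering choices are illustrative):

// Hedged sketch: -1 (INFO_TRY_AGAIN_LATER) just means "poll again later".
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int index = codec.dequeueOutputBuffer(info, 10000 /* timeout in us */);
if (index >= 0) {
    codec.releaseOutputBuffer(index, true);          // render to the surface and recycle
} else if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
    MediaFormat newFormat = codec.getOutputFormat(); // handle resolution/color changes
} else if (index == MediaCodec.INFO_TRY_AGAIN_LATER) {
    // no output yet: keep queueing input buffers and poll again on the next pass
}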

AudioTrack restarting even after it is stopped

I created a simple application that generates a square wave of a given frequency and plays it using AudioTrack in STREAM mode (STREAM_MUSIC). Everything seems to be working fine and the sound plays okay; however, when the stream is finished I get messages in the log:
W/AudioTrack( 7579): obtainBuffer() track 0x14c228 disabled, restarting ...
Even after calling the stop() function I still get these.
I believe I properly set the AudioTrack buffer size, based on the minimum size required by AudioTrack (in my case 6x1024). I feed it with smaller buffers of 1024 shorts.
Is it okay that I'm getting these and should I leave it like that?
OK, I think the problem is solved. The error is generated when the buffer is not completely filled with data in time (buffer underrun). I have no idea what the timeout is, but if you experience this, make sure that:
You don't call the play method until you have some data in the buffer.
You can generate the data fast enough to beat the timeout.
After you are finished feeding the buffer with data, but before you call the stop() method, make sure that the "last" buffer was completely filled with data before the timeout.
I dealt with the last issue by always waiting a little (until the timeout), then sending one buffer full of zeroes, and finally calling the stop() function; a sketch of this follows.
Keep in mind that you must always send the buffer in smaller chunks, even if you have the big chunk ready. It still bothers me a bit that I'm not 100% sure this is the right way, but the errors are gone, so I guess I can live with that :)
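As a minimal sketch of that "pad with silence, then stop" idea (the chunk size is assumed to match the 1024-short writes used above; audioTrack is the playing instance):

// Hedged sketch: write one full chunk of silence before stopping so the
// last real samples are consumed without triggering a buffer underrun.
short[] silence = new short[1024];            // zero-initialized by default
audioTrack.write(silence, 0, silence.length); // one chunk of silence
audioTrack.stop();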
I've found that even when the buffer is technically long enough and filled with bytes, if they aren't properly formatted (audio shorts converted to a byte array) it will still throw that error.
I was getting that warning when I instantiated the Audiotrack, called audioTrack.play() and there was a slight delay between the play() call and the audioTrack.write(). If I called play() right before write() the warning disappeared.
I solved it with this:
if (mAudioTrack.getPlayState() != AudioTrack.PLAYSTATE_PLAYING)
    mAudioTrack.play();
mAudioTrack.write(b, 0, sz * 2);
mAudioTrack.stop();
mAudioTrack.flush();
