I'm working on a video processing app. The app has one Activity that contains a Fragment. The Fragment in turn contains a VideoSurfaceView, derived from GLSurfaceView, that shows users a preview of the video with effects applied (using OpenGL). After previewing, users can start processing the video.
To process the video, I mainly apply the method described here.
Everything works fine on most devices, except the Oppo Mirror 3 (Android 4.4). On this device, every time I try to create a Surface using MediaCodec.createInputSurface(), it throws java.lang.IllegalStateException with code -38.
E/OMXMaster: A component of name 'OMX.qcom.audio.decoder.aac' already exists, ignoring this one.
E/SoftAVCEncoder: internalSetParameter: StoreMetadataInBuffersParams.nPortIndex not zero!
E/OMXNodeInstance: OMX_SetParameter() failed for StoreMetaDataInBuffers: 0x80001001
E/ACodec: [OMX.google.h264.encoder] storeMetaDataInBuffers (output) failed w/ err -2147483648
E/OMXNodeInstance: createInputSurface requires COLOR_FormatSurface (AndroidOpaque) color format
E/ACodec: [OMX.google.h264.encoder] onCreateInputSurface returning error -38
E/VideoProcessing: java.lang.IllegalStateException
at android.media.MediaCodec.createInputSurface(Native Method)
at com.ltpquang.android.core.processing.codec.VideoEncoder.<init>(VideoEncoder.java:46)
at com.ltpquang.android.core.VideoProcessing.setupVideo(VideoProcessing.java:200)
at com.ltpquang.android.core.VideoProcessing.<init>(VideoProcessing.java:167)
at com.ltpquang.android.ui.activity.PreviewEditActivity.lambda$btNext$12(PreviewEditActivity.java:723)
at com.ltpquang.android.ui.activity.PreviewEditActivity.access$lambda$12(PreviewEditActivity.java)
at com.ltpquang.android.ui.activity.PreviewEditActivity$$Lambda$13.run(Unknown Source)
at java.lang.Thread.run(Thread.java:841)
Playing around a little bit, I observed that:
BEFORE creating and adding the VideoSurfaceView to the layout, I can create the MediaCodec encoder and obtain the input surface successfully. I can create as many as I want, provided I release the previous MediaCodec before creating a new one; otherwise I can obtain one and only one input surface, regardless of how many MediaCodecs I create.
AFTER creating and adding the VideoSurfaceView to the layout, there is no way I can get the input surface from the MediaCodec; it always throws java.lang.IllegalStateException.
I've tried removing the VideoSurfaceView from the layout and setting it to null before creating the surface, but no luck.
I also tried the suggestions from here and here, but they didn't help.
From the log, it seems that my device can only get the software codec (OMX.google.h264.encoder), which is why I can't create the input surface.
My questions are:
Why does this happen?
If the device's resources are limited, what can I do (release something, for example) to continue the process?
If it is related to the software codec, what should I do? How can I detect and release the resource?
Is this related to GL contexts? If yes, what should I do? Should I manage the contexts myself?
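In the meantime, to at least detect when I end up with the software encoder, I am considering a heuristic like the sketch below (my own code, assuming the pre-API-21 MediaCodecList calls available on Android 4.4; as far as I know the "OMX.google.*" naming for software codecs is a convention, not a documented guarantee):

    import android.media.MediaCodecInfo;
    import android.media.MediaCodecList;
    import android.util.Log;

    /** Logs every video/avc encoder, flagging the likely software ones. */
    static void logAvcEncoders() {
        for (int i = 0; i < MediaCodecList.getCodecCount(); i++) {
            MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
            if (!info.isEncoder()) continue;
            for (String type : info.getSupportedTypes()) {
                if (type.equalsIgnoreCase("video/avc")) {
                    // Software codecs are conventionally named "OMX.google.*"
                    // (cf. OMX.google.h264.encoder in the log above).
                    boolean software = info.getName().startsWith("OMX.google.");
                    Log.d("CodecCheck", info.getName()
                            + (software ? " (software)" : " (likely hardware)"));
                }
            }
        }
    }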
Related
The app I am working on gets video from the camera through a Surface and encodes it to video/avc (H264). I am doing that successfully and it works great on phones like the Galaxy Note 10+, but on newer phones like the Xiaomi Note 10S I am having this issue. Here is what I am doing:
1. Create the format:

    format = MediaFormat.createVideoFormat(
        H264, videoWidth, videoHeight
    ).apply {
        setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 0)
        setInteger(MediaFormat.KEY_BIT_RATE, bitrate)
        setInteger(MediaFormat.KEY_FRAME_RATE, videoFrameRate)
        setInteger(
            MediaFormat.KEY_COLOR_FORMAT,
            CodecCapabilities.COLOR_FormatSurface
        )
        setFloat(MediaFormat.KEY_I_FRAME_INTERVAL, 1f)
    }
2. Then find the encoder name:

    val encoderName = MediaCodecList(
        MediaCodecList.ALL_CODECS
    ).findEncoderForFormat(format) // using the format I shared in the first step
3. Then create the codec:

    codec = MediaCodec.createByCodecName(encoderName)

and call .setCallback(callback) (not important here, since we won't make it to this point; it crashes before that).
4. And this is the line where it crashes.
codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE) //CRASH => MediaCodec$CodecException: Error 0x80001001
The rest:

    codec.setInputSurface(surface)
    codec.start()
I suspect the

    setInteger(
        MediaFormat.KEY_COLOR_FORMAT,
        CodecCapabilities.COLOR_FormatSurface
    ) // I tried changing the value and completely removing this setInteger, no luck :/
Error 0x80001001, also known as OMX_ErrorUndefined, says:
"There was an error, but the cause of the error could not be determined."
The most likely cause of this error is insufficient resources. This can happen, for example, if you try to configure a hardware codec but there is not enough graphics memory available at the moment.
Suggestion 1: Make sure you release the codecs when you are done using them. You need to check all code paths.
Suggestion 2: Knowing that this can happen, you can filter the MediaCodecList, keeping all the encoders that support the given format. Then wrap the configure() call in a try/catch block. And, if the call fails, try the next option from the list of codecs (see the sketch after the next note).
Note that on most devices there are at least two codecs for H264: a hardware codec and a software codec. The former has better performance; the latter is more resilient.
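A minimal Java sketch of Suggestion 2 (assuming API 21+, to match the MediaCodecList(ALL_CODECS) usage in the question; the method name and structure are mine, not a definitive implementation):

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaCodecList;
    import android.media.MediaFormat;
    import java.io.IOException;

    /** Tries every encoder claiming to support the format; returns the first
     *  one that configures successfully, or null if none does. */
    static MediaCodec configureAnyEncoder(MediaFormat format) {
        String mime = format.getString(MediaFormat.KEY_MIME);
        MediaCodecList list = new MediaCodecList(MediaCodecList.ALL_CODECS);
        for (MediaCodecInfo info : list.getCodecInfos()) {
            if (!info.isEncoder()) continue;
            MediaCodecInfo.CodecCapabilities caps;
            try {
                caps = info.getCapabilitiesForType(mime);
            } catch (IllegalArgumentException e) {
                continue; // this codec does not handle the mime type at all
            }
            if (!caps.isFormatSupported(format)) continue;
            try {
                MediaCodec codec = MediaCodec.createByCodecName(info.getName());
                try {
                    codec.configure(format, null, null,
                            MediaCodec.CONFIGURE_FLAG_ENCODE);
                    return codec; // success: the caller must release() it later
                } catch (MediaCodec.CodecException e) {
                    codec.release(); // Suggestion 1: release on every code path
                }
            } catch (IOException e) {
                // createByCodecName failed; fall through to the next candidate
            }
        }
        return null;
    }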
I am trying to extract images directly from the GPU (so without using AcquireCameraImageBytes()) for performance reasons (a Samsung S9 can't reach 10 fps) and to support the Xiaomi Pocophone I bought. I use the TextureReader class included in the ComputerVision example, but OnImageAvailableCallback is never called and the log shows some errors during initialization:
camera_utility: Failed to create OpenGL frame buffer.
camera_utility: Failed to create OpenGL frame buffer.
I inserted a couple of breakpoints inside libarcore_camera_utility.so and saw that glCheckFramebufferStatus inside TextureReader::create returns 0 (so an error occurred), but glGetError() doesn't return any error. How can I resolve this problem?
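One thing worth ruling out (my speculation, not a confirmed fix): glCheckFramebufferStatus() returning 0 while glGetError() stays clean is the pattern you typically get when GL is called on a thread that has no current EGL context, because without a context the calls, including the error reporting itself, silently do nothing. A quick Java-side check:

    import android.opengl.EGL14;
    import android.util.Log;

    /** Logs whether the calling thread currently has an EGL context bound. */
    static void logCurrentEglContext() {
        boolean hasContext =
                !EGL14.eglGetCurrentContext().equals(EGL14.EGL_NO_CONTEXT);
        Log.d("EglCheck", "EGL context current on this thread: " + hasContext);
    }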
When I call getCurrentMillis on the Android MediaPlayer, it throws IllegalStateException and I see the following in the logs:
E/MediaPlayer: error (1, -12)
I have checked that the video is being correctly prepared.
It also causes OnErrorListener.onError to be called with args 1 and -12.
In the end I found that this was caused by trying to play a video that was only 48px wide. Since I had cropped the video in ffmpeg from a larger one, I was able to use a bigger crop. When I increased the width to 64px, the error no longer occurred.
Maybe MediaPlayer has an undocumented minimum allowed size, or maybe the narrower crop was in violation of the H264 spec or something. Hope this helps someone.
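If you want to guard against this programmatically, here is a purely empirical check based on the observation above (the real constraint is unknown; the 64px threshold is simply what worked for me):

    // Empirical guard: a 48 px wide video failed with error (1, -12),
    // while 64 px worked. The actual minimum (if any) is undocumented.
    private static final int MIN_SAFE_WIDTH = 64;

    static int clampCropWidth(int requestedWidth) {
        return Math.max(requestedWidth, MIN_SAFE_WIDTH);
    }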
I'm processing a live stream via MediaCodec and have a scenario where the MediaFormat changes mid-stream (i.e. the resolution of the video being decoded changes). Since I'm attaching the decoder to a Surface to render it, as soon as I detect the change in resolution on the incoming stream, I recreate the decoder before feeding it the new-resolution buffers (providing it with the proper new MediaFormat).
I've been getting some weird errors that don't give me much information about what could be wrong, e.g. when calling MediaCodec.configure with the new format and the same Surface:
android.media.MediaCodec$CodecException: Error 0xffffffea
at android.media.MediaCodec.native_configure(Native Method)
at android.media.MediaCodec.configure(MediaCodec.java:577)
When fetching CodecException.getDiagnosticInfo(), it shows nothing that I can really use to understand the reason for the failure: android.media.MediaCodec.error_neg_22
I've also noted the following in the logs and found some related information, and I'm wondering if there's something I need to do regarding the Surface itself (like detaching it from the old instance of the decoder before giving it to the new one):
07-09 15:00:17.217 E/BufferQueueProducer( 139): [SurfaceView] connect(P): already connected (cur=3 req=3)
07-09 15:00:17.217 E/MediaCodec( 5388): native_window_api_connect returned an error: Invalid argument (-22)
07-09 15:00:17.218 E/MediaCodec( 5388): configure failed with err 0xffffffea, resetting...
It looks like calling stop() and release(), as well as reinitializing any references I had from getInputBuffers() and getOutputBuffers(), did the trick. At least I don't get the messages/exceptions anymore. Now I just need to figure out the Surface reference part, as it seems the resized stream (when the resolution changes) is still being fit into the original surface dimensions instead of the Surface adjusting to the new resolution.
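For reference, the teardown/recreate sequence that stopped the exceptions looks roughly like this (a sketch with my own field and method names, not the exact production code):

    import android.media.MediaCodec;
    import android.media.MediaFormat;
    import android.view.Surface;
    import java.io.IOException;
    import java.nio.ByteBuffer;

    private MediaCodec decoder;
    private ByteBuffer[] inputBuffers;
    private ByteBuffer[] outputBuffers;

    void recreateDecoder(MediaFormat newFormat, Surface surface) throws IOException {
        decoder.stop();
        decoder.release();
        inputBuffers = null;   // stale arrays from getInputBuffers() ...
        outputBuffers = null;  // ... and getOutputBuffers() must be dropped

        String mime = newFormat.getString(MediaFormat.KEY_MIME);
        decoder = MediaCodec.createDecoderByType(mime);
        decoder.configure(newFormat, surface, null, 0);
        decoder.start();
        inputBuffers = decoder.getInputBuffers();
        outputBuffers = decoder.getOutputBuffers();
    }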
If your decoder supports adaptive playback, then apparently you can alter some codec parameters on the fly:
https://stackoverflow.com/a/34427724/1048170
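To check for that support at runtime, something like the following should work (a sketch; FEATURE_AdaptivePlayback is a decoder feature and requires API 19+):

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;

    /** Returns true if the decoder can adapt to resolution changes
     *  mid-stream without being reconfigured. */
    static boolean supportsAdaptivePlayback(MediaCodec decoder, String mime) {
        MediaCodecInfo.CodecCapabilities caps =
                decoder.getCodecInfo().getCapabilitiesForType(mime);
        return caps.isFeatureSupported(
                MediaCodecInfo.CodecCapabilities.FEATURE_AdaptivePlayback);
    }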
I'm trying to get the following MediaExtractor example to work:
http://bigflake.com/mediacodec/ - ExtractMpegFramesTest.java (requires 4.1, API 16)
The problem I have is that outputSurface.awaitNewImage(); seems to always throw RuntimeException("frame wait timed out"), which is thrown whenever the mFrameSyncObject.wait(TIMEOUT_MS) call times out. No matter what I set TIMEOUT_MS to be, onFrameAvailable() always gets called right after the timeout occurs. I tried with 50ms and with 30000ms and it's the same.
It seems that the onFrameAvailable() callback can't be handled while the thread is busy, and only once the timeout ends the waiting code does the thread get to process the onFrameAvailable() call.
Has anyone managed to get this example to work, or knows how MediaExtractor is supposed to work with GL textures?
Edit: tried this on devices running Android 4.4 and 4.1.1 and the same happens on both.
Edit 2:
Got it working on 4.4 thanks to fadden. The issue was that the ExtractMpegFramesWrapper.runTest() method called th.join();, which blocked the main thread and prevented the onFrameAvailable() call from being processed. Once I commented out th.join();, it works on 4.4. I guess the ExtractMpegFramesWrapper.runTest() method itself was supposed to run on yet another thread so the main thread didn't get blocked.
There was also a small issue on 4.1.2 when calling codec.configure(); it gave the error:
A/ACodec(2566): frameworks/av/media/libstagefright/ACodec.cpp:1041 CHECK(def.nBufferSize >= size) failed.
A/libc(2566): Fatal signal 11 (SIGSEGV) at 0xdeadbaad (code=1), thread 2625 (CodecLooper)
Which I solved by adding the following before the call:
format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 0);
However the problem I have now on both 4.1.1 (Galaxy S2 GT-I9100) and 4.1.2 (Samsung Galaxy Tab GT-P3110) is that they both always set info.size to 0 for all frames. Here is the log output:
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
submitted frame 0 to dec, size=20562
no output from decoder available
loop
submitted frame 1 to dec, size=7193
no output from decoder available
loop
[... skipped 18 lines ...]
submitted frame 8 to dec, size=6531
no output from decoder available
loop
submitted frame 9 to dec, size=5639
decoder output format changed: {height=240, what=1869968451, color-format=19, slice-height=240, crop-left=0, width=320, crop-bottom=239, crop-top=0, mime=video/raw, stride=320, crop-right=319}
loop
submitted frame 10 to dec, size=6272
surface decoder given buffer 0 (size=0)
loop
[... skipped 1211 lines ...]
submitted frame 409 to dec, size=456
surface decoder given buffer 1 (size=0)
loop
sent input EOS
surface decoder given buffer 0 (size=0)
loop
surface decoder given buffer 1 (size=0)
loop
surface decoder given buffer 0 (size=0)
loop
surface decoder given buffer 1 (size=0)
loop
[... skipped 27 lines all with size=0 ...]
surface decoder given buffer 1 (size=0)
loop
surface decoder given buffer 0 (size=0)
output EOS
Saving 0 frames took ? us per frame // edited to avoid division-by-zero error
So no images get saved. However, the same code and video work on 4.3. The video I am using is an .mp4 file with "H264 - MPEG-4 AVC (avc1)" video codec and "MPEG AAC Audio (mp4a)" audio codec.
I also tried other video formats, but they seem to die even sooner on 4.1.x, while both work on 4.3.
Edit 3:
I did as you suggested, and it seems to save the frame images correctly. Thank you.
Regarding KEY_MAX_INPUT_SIZE, I tried not setting it, or setting it to 0, 20, 200, ..., 200000000, all with the same result of info.size=0.
I am now unable to get the render onto a SurfaceView or TextureView in my layout. I tried replacing this line:
mSurfaceTexture = new SurfaceTexture(mTextureRender.getTextureId());
with this, where textureView is a TextureView defined in my XML layout:
mSurfaceTexture = textureView.getSurfaceTexture();
mSurfaceTexture.attachToGLContext(mTextureRender.getTextureId());
but it throws a weird error with getMessage()==null on the second line. I couldn't find any other way to get it to draw on a View of some kind. How can I change the decoder to display the frames on a Surface/SurfaceView/TextureView instead of saving them?
The way SurfaceTexture works makes this a bit tricky to get right.
The docs say the frame-available callback "is called on an arbitrary thread". The SurfaceTexture class has a bit of code that does the following when initializing (line 318):
if (this thread has a looper) {
handle events on this thread
} else if (there's a "main" looper) {
handle events on the main UI thread
} else {
no events for you
}
The frame-available events are delivered to your app through the usual Looper / Handler mechanism. That mechanism is just a message queue, which means the thread needs to be sitting in the Looper event loop waiting for them to arrive. The trouble is, if you're sleeping in awaitNewImage(), you're not watching the Looper queue. So the event arrives, but nobody sees it. Eventually awaitNewImage() times out, and the thread returns to watching the event queue, where it immediately discovers the pending "new frame" message.
So the trick is to make sure that frame-available events arrive on a different thread from the one sitting in awaitNewImage(). In the ExtractMpegFramesTest example, this is done by running the test in a newly-created thread (see the ExtractMpegFramesWrapper class), which does not have a Looper. (For some reason the thread that executes CTS tests has a looper.) The frame-available events arrive on the main UI thread.
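Schematically, that arrangement looks like this (an illustrative sketch; the method and names are mine, not code from the test):

    /** Illustrative launcher: keep the frame-waiting work off any Looper thread. */
    static void startExtraction(final Runnable testBody) {
        Thread worker = new Thread(new Runnable() {
            @Override
            public void run() {
                // Deliberately no Looper.prepare(): this thread must not own
                // a Looper, or SurfaceTexture will deliver frame-available
                // events here while we are blocked inside awaitNewImage().
                testBody.run(); // e.g. the body that decodes and waits for frames
            }
        });
        worker.start();
        // Don't join() from a Looper thread (such as the main thread): that
        // blocks the thread that has to process the onFrameAvailable() messages.
    }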
Update (for "edit 3"): I'm a bit sad that ignoring the "size" field helped, but pre-4.3 it's hard to predict how devices will behave.
If you just want to display the frame, pass the Surface you get from the SurfaceView or TextureView into the MediaCodec decoder configure() call. Then you don't have to mess with SurfaceTexture at all -- the frames will be displayed as you decode them. See the two "Play video" activities in Grafika for examples.
If you really want to go through a SurfaceTexture, you need to change CodecOutputSurface to render to a window surface rather than a pbuffer. (The off-screen rendering is done so we can use glReadPixels() in a headless test.)
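For the first option, the wiring is minimal (a sketch, assuming the SurfaceView's surface has already been created):

    import android.media.MediaCodec;
    import android.media.MediaFormat;
    import android.view.Surface;
    import android.view.SurfaceView;

    /** Hands the View's Surface to the decoder; each output buffer released
     *  with render == true is then displayed on that Surface. */
    static void configureForDisplay(MediaCodec decoder, MediaFormat format,
                                    SurfaceView surfaceView) {
        Surface surface = surfaceView.getHolder().getSurface();
        decoder.configure(format, surface, null, 0);
        decoder.start();
        // ... then, in the output loop:
        // decoder.releaseOutputBuffer(index, true); // true = render the frame
    }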