I'm processing a live stream via MediaCodec and have a scenario where the MediaFormat changes mid-stream (i.e. the resolution of the video being decoded changes). Since I'm attaching the decoder to a Surface to render it, as soon as I detect the change in resolution on the incoming stream I recreate the decoder before feeding it the new-resolution buffers (providing it with the proper new MediaFormat).
I've been getting some weird errors that don't give me much information about what could be wrong, e.g. when calling MediaCodec.configure with the new format and the same Surface:
android.media.MediaCodec$CodecException: Error 0xffffffea
at android.media.MediaCodec.native_configure(Native Method)
at android.media.MediaCodec.configure(MediaCodec.java:577)
Fetching CodecException.getDiagnosticInfo() shows nothing I can really use to understand the reason for the failure: android.media.MediaCodec.error_neg_22
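For reference, this is roughly how I'm pulling everything the exception exposes (a sketch, API 21+; decoder, newFormat and surface are my own variables):

try {
    decoder.configure(newFormat, surface, null, 0);
} catch (MediaCodec.CodecException e) {
    // error_neg_22 corresponds to -22 (-EINVAL); the two flags at least
    // indicate whether a retry or a full codec recreate is expected to help
    Log.e(TAG, "configure failed: " + e.getDiagnosticInfo()
            + " recoverable=" + e.isRecoverable()
            + " transient=" + e.isTransient());
}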
I've also noted the following in the logs and found some related information, and I'm wondering if there's something I need to do regarding the Surface itself (like detaching it from the old instance of the decoder before giving it to the new one):
07-09 15:00:17.217 E/BufferQueueProducer( 139): [SurfaceView] connect(P): already connected (cur=3 req=3)
07-09 15:00:17.217 E/MediaCodec( 5388): native_window_api_connect returned an error: Invalid argument (-22)
07-09 15:00:17.218 E/MediaCodec( 5388): configure failed with err 0xffffffea, resetting...
Calling stop() and release(), as well as reinitializing any references I had from getInputBuffers() and getOutputBuffers(), did the trick; at least I don't get the messages/exceptions anymore. Now I just need to figure out the Surface reference part, as the resized stream (when the resolution changes) is still being fit into the original Surface dimensions instead of the Surface adjusting to the new resolution.
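For reference, the rebuild sequence that stopped the errors looks roughly like this (a sketch; the variable names are mine, and the new dimensions come from parsing the incoming stream):

try {
    decoder.stop();
    decoder.release();
    MediaFormat newFormat =
            MediaFormat.createVideoFormat("video/avc", newWidth, newHeight);
    decoder = MediaCodec.createDecoderByType("video/avc");
    decoder.configure(newFormat, surface, null, 0);
    decoder.start();
    // the old buffer arrays are invalid after release(); re-fetch them
    inputBuffers = decoder.getInputBuffers();
    outputBuffers = decoder.getOutputBuffers();
} catch (IOException e) {
    // createDecoderByType failed; handle or propagate
}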
If your decoder supports adaptive playback, then apparently you can alter some codec parameters on the fly:
https://stackoverflow.com/a/34427724/1048170
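Checking for that feature looks roughly like this (a sketch; FEATURE_AdaptivePlayback and the max-size keys require API 19+, and the mime string and sizes are examples):

MediaCodecInfo.CodecCapabilities caps =
        decoder.getCodecInfo().getCapabilitiesForType("video/avc");
if (caps.isFeatureSupported(
        MediaCodecInfo.CodecCapabilities.FEATURE_AdaptivePlayback)) {
    // declare the largest resolution the stream may switch to; the codec
    // then reports resolution changes via INFO_OUTPUT_FORMAT_CHANGED
    // instead of needing a full recreate
    format.setInteger(MediaFormat.KEY_MAX_WIDTH, 1920);
    format.setInteger(MediaFormat.KEY_MAX_HEIGHT, 1080);
}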
Related
I'm trying to get the video stream of a camera into an ImageReader surface so I can process the images. I found a lot of examples that deal with the camera2 API, but I don't use that because my video stream comes from an external camera.
Ideally, I would have two surfaces: one as a preview and one from the ImageReader to process the images, similar to this. I understand that you combine the two surfaces with a CaptureRequest.Builder and then .addTarget(surface). The problem is that I don't have a CameraDevice on which to call createCaptureRequest.
The code I am using can be found here.
I tried to just create an ImageReader and its surface and pass it to the startDecoding function, but this didn't work well, as I got this error:
E/JNI: close+++++++
E/BufferQueueProducer: [ImageReader-1280x720f32315659m16-17834-0] dequeueBuffer: BufferQueue has been abandoned
E/ACodec: NATIVE_WINDOW_MIN_UNDEQUEUED_BUFFERS query failed: No such device (19)
E/ACodec: Failed to allocate output port buffers after port reconfiguration: (-19)
E/ACodec: signalError(omxError 0x80001001, internalError -19)
E/MediaCodec: Codec reported err 0xffffffed, actionCode 0, while in state 6
E/AccessHeadCameraActivity: Error has occured.
java.lang.IllegalStateException
at android.media.MediaCodec.native_dequeueOutputBuffer(Native Method)
at android.media.MediaCodec.dequeueOutputBuffer(MediaCodec.java:2379)
Any hint in the right direction would be nice!
Update 1:
The error results from the return value of dequeueOutputBuffer, which is -1. According to the MediaCodec docs, this means the call timed out. But why does that happen?
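(For the record, -1 is MediaCodec.INFO_TRY_AGAIN_LATER; my handling of the status codes looks roughly like this, with decoder and info being my own variables:)

int status = decoder.dequeueOutputBuffer(info, 10000 /* timeout in µs */);
if (status == MediaCodec.INFO_TRY_AGAIN_LATER) {
    // timed out: no output ready yet, keep feeding input
} else if (status == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
    MediaFormat newFormat = decoder.getOutputFormat();
} else if (status >= 0) {
    // render=true pushes the decoded frame to the ImageReader's surface
    decoder.releaseOutputBuffer(status, true);
}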
Update 2:
I don't have surfaceCreated anymore (because I no longer have the SurfaceView), so that code moved into onCreate. Everything else is pretty much the same as here:
@Override
public void onCreate(Bundle savedInstanceState) {
    getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_acess_headcamera);
    mediaManager = (MediaManager) getUnitManager(FuncConstant.MEDIA_MANAGER);
    setupImageReader();
    StreamOption streamOption = new StreamOption();
    streamOption.setChannel(StreamOption.MAIN_STREAM);
    streamOption.setDecodType(StreamOption.HARDWARE_DECODE);
    streamOption.setJustIframe(false);
    mediaManager.openStream(streamOption);
    surface = imageReader.getSurface();
    startDecoding(surface);
    initListener();
}

private void setupImageReader() {
    imageReader = ImageReader.newInstance(width, height, ImageFormat.YV12,
            IMAGE_READER_BUFFER_SIZE);
    imageReader.setOnImageAvailableListener(onImageAvailableListener, backgroundHandler);
}
I'm trying to write a simple video encoder that uses the Android platform's MediaCodec class in "surface input" mode.
These are the steps I'm following (supporting code left out for the sake of brevity):
mediaCodec = MediaCodec::CreateByType(looper, "video/avc", true);
mediaCodec->configure(config, NULL, NULL, CONFIGURE_FLAG_ENCODE);
mediaCodec->createInputSurface(&inputSurface);
mediaCodec->start();
Following this, I'm trying to dequeue a buffer from the created input surface (which is an IGraphicBufferProducer interface object), but it fails with the NO_INIT error:
inputSurface->dequeueBuffer(&slot, &fence, w, h, format, 0);
The error message in the ADB log is:
BufferQueueProducer: [GraphicBufferSource] dequeueBuffer: BufferQueue has no connected producer
Any idea why the buffer queue has no connected producer? I would assume that the MediaCodec class would handle the creation of the buffer queue as well as the connection of the producer and consumers to the queue.
I'm using Android 7.1.2 (API level 25). I'm using the platform-level libs because my use case requires access to GraphicBuffer objects.
Thanks in advance!
EDIT: The general idea is to:
1. Dequeue buffers from the input surface and fill them.
2. Queue the filled buffers back to the input surface (which would presumably trigger the media codec (video encoder) instance that the surface belongs to).
3. Dequeue output buffers (containing raw H.264 bitstream data) from the media codec instance, and write them to a file.
4. Release output buffers back to the media codec instance.
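For comparison, steps 3 and 4 look roughly like this at the Java SDK level (a sketch; I need the platform-level path because of the GraphicBuffer requirement):

MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int index = encoder.dequeueOutputBuffer(info, 10000 /* timeout in µs */);
if (index >= 0) {
    // position..limit of the returned buffer delimit one chunk of
    // H.264 bitstream; write it out, then hand the buffer back
    ByteBuffer encoded = encoder.getOutputBuffer(index);
    encoder.releaseOutputBuffer(index, false);
}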
From the IGraphicBufferProducer documentation:
// * NO_INIT - the buffer queue has been abandoned or the producer is not
// connected.
I guess the part that is missing in your code is this "connect".
IGraphicBufferProducer has such a method; are you using it?
I'm working on a video processing app. The app has one Activity that contains a Fragment; the Fragment in turn contains a VideoSurfaceView derived from GLSurfaceView, which I use to show users a preview of the video with effects applied (using OpenGL). After previewing, users can start processing the video.
To process the video, I mainly apply the method described here.
Everything works fine on most devices, except the Oppo Mirror 3 (Android 4.4). On this device, every time I try to create a Surface using MediaCodec.createInputSurface(), it throws java.lang.IllegalStateException with code -38.
E/OMXMaster: A component of name 'OMX.qcom.audio.decoder.aac' already exists, ignoring this one.
E/SoftAVCEncoder: internalSetParameter: StoreMetadataInBuffersParams.nPortIndex not zero!
E/OMXNodeInstance: OMX_SetParameter() failed for StoreMetaDataInBuffers: 0x80001001
E/ACodec: [OMX.google.h264.encoder] storeMetaDataInBuffers (output) failed w/ err -2147483648
E/OMXNodeInstance: createInputSurface requires COLOR_FormatSurface (AndroidOpaque) color format
E/ACodec: [OMX.google.h264.encoder] onCreateInputSurface returning error -38
E/VideoProcessing: java.lang.IllegalStateException
at android.media.MediaCodec.createInputSurface(Native Method)
at com.ltpquang.android.core.processing.codec.VideoEncoder.<init>(VideoEncoder.java:46)
at com.ltpquang.android.core.VideoProcessing.setupVideo(VideoProcessing.java:200)
at com.ltpquang.android.core.VideoProcessing.<init>(VideoProcessing.java:167)
at com.ltpquang.android.ui.activity.PreviewEditActivity.lambda$btNext$12(PreviewEditActivity.java:723)
at com.ltpquang.android.ui.activity.PreviewEditActivity.access$lambda$12(PreviewEditActivity.java)
at com.ltpquang.android.ui.activity.PreviewEditActivity$$Lambda$13.run(Unknown Source)
at java.lang.Thread.run(Thread.java:841)
Playing around a little bit, I observed that:
BEFORE creating and adding the VideoSurfaceView to the layout, I can create a MediaCodec encoder and obtain the input surface successfully. I can create as many as I want, provided I release the previous MediaCodec before creating a new one; otherwise I can obtain one and only one input surface, regardless of how many MediaCodecs I create.
AFTER creating and adding the VideoSurfaceView to the layout, there is no chance I can get the input surface from the MediaCodec; it always throws java.lang.IllegalStateException.
I've tried removing the VideoSurfaceView from the layout and setting it to null before creating the surface, but no luck.
I also tried the suggestions from here and here, but they didn't help.
From this, it seems that my device can only get the software codec, so I can't create the input surface.
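One way to confirm that is to list every AVC encoder on the device; hardware codecs are typically the ones not named OMX.google.* (a sketch using the pre-API-21 MediaCodecList API, since the device is 4.4):

for (int i = 0; i < MediaCodecList.getCodecCount(); i++) {
    MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
    if (!info.isEncoder()) {
        continue;
    }
    for (String type : info.getSupportedTypes()) {
        if (type.equalsIgnoreCase("video/avc")) {
            // e.g. OMX.qcom.video.encoder.avc (hardware)
            //  vs. OMX.google.h264.encoder (software)
            Log.i(TAG, "AVC encoder: " + info.getName());
        }
    }
}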
My questions are:
Why is that?
If the device's resources are limited, what can I do (release something, for example) to continue the process?
If it is related to the software codec, what should I do? How can I detect and release the resources?
Is this related to GL contexts? If yes, what should I do? Should I manage the contexts myself?
When I call getCurrentMillis on the Android MediaPlayer, it throws IllegalStateException and I see the following in the logs:
E/MediaPlayer: error (1, -12)
I have checked that the video is being correctly prepared.
It also causes OnErrorListener.onError to be called with args 1 and -12.
In the end I found that this was caused by trying to play a video that was only 48px wide. Since I had cropped the video with ffmpeg from a larger one, I was able to use a bigger crop; when I increased the width to 64px, the error no longer occurred.
Maybe MediaPlayer has an undocumented minimum allowed size, or maybe the narrower crop was in violation of the H.264 spec or something. Hope this helps someone.
I'm trying to get the following MediaExtractor example to work:
http://bigflake.com/mediacodec/ - ExtractMpegFramesTest.java (requires 4.1, API 16)
The problem I have is that outputSurface.awaitNewImage() always seems to throw RuntimeException("frame wait timed out"), which happens whenever the mFrameSyncObject.wait(TIMEOUT_MS) call times out. No matter what I set TIMEOUT_MS to, onFrameAvailable() always gets called right after the timeout occurs. I tried 50 ms and 30000 ms and it's the same.
It seems like the onFrameAvailable() call can't be made while the thread is busy, and only once the timeout ends the thread's code execution can it process the onFrameAvailable() call.
Has anyone managed to get this example to work, or know how MediaExtractor is supposed to work with GL textures?
Edit: I tried this on devices with Android 4.4 and 4.1.1, and the same happens on both.
Edit 2:
Got it working on 4.4 thanks to fadden. The issue was that the ExtractMpegFramesWrapper.runTest() method called th.join(), which blocked the main thread and prevented the onFrameAvailable() call from being processed. Once I commented out th.join(), it works on 4.4. I guess ExtractMpegFramesWrapper.runTest() itself was supposed to run on yet another thread so the main thread wouldn't get blocked.
There was also a small issue on 4.1.2 when calling codec.configure(); it gave this error:
A/ACodec(2566): frameworks/av/media/libstagefright/ACodec.cpp:1041 CHECK(def.nBufferSize >= size) failed.
A/libc(2566): Fatal signal 11 (SIGSEGV) at 0xdeadbaad (code=1), thread 2625 (CodecLooper)
I solved this by adding the following before the call:
format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 0);
However, the problem I now have on both 4.1.1 (Galaxy S2 GT-I9100) and 4.1.2 (Samsung Galaxy Tab GT-P3110) is that they both always set info.size to 0 for all frames. Here is the log output:
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
submitted frame 0 to dec, size=20562
no output from decoder available
loop
submitted frame 1 to dec, size=7193
no output from decoder available
loop
[... skipped 18 lines ...]
submitted frame 8 to dec, size=6531
no output from decoder available
loop
submitted frame 9 to dec, size=5639
decoder output format changed: {height=240, what=1869968451, color-format=19, slice-height=240, crop-left=0, width=320, crop-bottom=239, crop-top=0, mime=video/raw, stride=320, crop-right=319}
loop
submitted frame 10 to dec, size=6272
surface decoder given buffer 0 (size=0)
loop
[... skipped 1211 lines ...]
submitted frame 409 to dec, size=456
surface decoder given buffer 1 (size=0)
loop
sent input EOS
surface decoder given buffer 0 (size=0)
loop
surface decoder given buffer 1 (size=0)
loop
surface decoder given buffer 0 (size=0)
loop
surface decoder given buffer 1 (size=0)
loop
[... skipped 27 lines all with size=0 ...]
surface decoder given buffer 1 (size=0)
loop
surface decoder given buffer 0 (size=0)
output EOS
Saving 0 frames took ? us per frame // edited to avoid division-by-zero error
So no images get saved. However, the same code and video work on 4.3. The video I am using is an .mp4 file with "H264 - MPEG-4 AVC (avc1)" video codec and "MPEG AAC Audio (mp4a)" audio codec.
I also tried other video formats, but they seem to die even sooner on 4.1.x, while both work on 4.3.
Edit 3:
I did as you suggested, and it seems to save the frame images correctly. Thank you.
Regarding KEY_MAX_INPUT_SIZE, I tried not setting it, and setting it to 0, 20, 200, ..., 200000000, all with the same result: info.size=0.
What I can't do now is render to a SurfaceView or TextureView in my layout. I tried replacing this line:
mSurfaceTexture = new SurfaceTexture(mTextureRender.getTextureId());
with this, where surfaceTexture is a SurfaceTexture defined in my xml-layout:
mSurfaceTexture = textureView.getSurfaceTexture();
mSurfaceTexture.attachToGLContext(mTextureRender.getTextureId());
but it throws a weird error with getMessage()==null on the second line. I couldn't find any other way to get it to draw on a View of some kind. How can I change the decoder to display the frames on a Surface/SurfaceView/TextureView instead of saving them?
The way SurfaceTexture works makes this a bit tricky to get right.
The docs say the frame-available callback "is called on an arbitrary thread". The SurfaceTexture class has a bit of code that does the following when initializing (line 318):
if (this thread has a looper) {
handle events on this thread
} else if (there's a "main" looper) {
handle events on the main UI thread
} else {
no events for you
}
The frame-available events are delivered to your app through the usual Looper / Handler mechanism. That mechanism is just a message queue, which means the thread needs to be sitting in the Looper event loop waiting for them to arrive. The trouble is, if you're sleeping in awaitNewImage(), you're not watching the Looper queue. So the event arrives, but nobody sees it. Eventually awaitNewImage() times out, and the thread returns to watching the event queue, where it immediately discovers the pending "new frame" message.
So the trick is to make sure that frame-available events arrive on a different thread from the one sitting in awaitNewImage(). In the ExtractMpegFramesTest example, this is done by running the test in a newly-created thread (see the ExtractMpegFramesWrapper class), which does not have a Looper. (For some reason the thread that executes CTS tests has a looper.) The frame-available events arrive on the main UI thread.
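The relevant pieces are just a wait/notify pair, roughly like this (condensed from the test; mFrameAvailable must only be touched with the lock held):

@Override
public void onFrameAvailable(SurfaceTexture st) {
    // runs on whichever thread owns the Looper that received the event
    synchronized (mFrameSyncObject) {
        mFrameAvailable = true;
        mFrameSyncObject.notifyAll();
    }
}

public void awaitNewImage() {
    synchronized (mFrameSyncObject) {
        while (!mFrameAvailable) {
            try {
                // this only wakes up if onFrameAvailable() fires on a
                // different thread from the one waiting here
                mFrameSyncObject.wait(TIMEOUT_MS);
                if (!mFrameAvailable) {
                    throw new RuntimeException("frame wait timed out");
                }
            } catch (InterruptedException ie) {
                throw new RuntimeException(ie);
            }
        }
        mFrameAvailable = false;
    }
}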
Update (for "edit 3"): I'm a bit sad that ignoring the "size" field helped, but pre-4.3 it's hard to predict how devices will behave.
If you just want to display the frame, pass the Surface you get from the SurfaceView or TextureView into the MediaCodec decoder configure() call. Then you don't have to mess with SurfaceTexture at all -- the frames will be displayed as you decode them. See the two "Play video" activities in Grafika for examples.
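In code that's just something like this (a sketch; it assumes the TextureView's onSurfaceTextureAvailable callback has already fired):

SurfaceTexture st = textureView.getSurfaceTexture();
Surface surface = new Surface(st);  // or surfaceView.getHolder().getSurface()
decoder.configure(format, surface, null, 0);
decoder.start();
// then, for each decoded frame:
decoder.releaseOutputBuffer(bufferIndex, true /* render to the view */);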
If you really want to go through a SurfaceTexture, you need to change CodecOutputSurface to render to a window surface rather than a pbuffer. (The off-screen rendering is done so we can use glReadPixels() in a headless test.)
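The change is localized to the EGL surface creation; using the EGL14 flavor (API 17+) for brevity, it's roughly:

// headless version: an off-screen pbuffer we can glReadPixels() from
EGLSurface pbufferSurface = EGL14.eglCreatePbufferSurface(mEGLDisplay, mEGLConfig,
        new int[] { EGL14.EGL_WIDTH, width, EGL14.EGL_HEIGHT, height,
                EGL14.EGL_NONE }, 0);

// on-screen version: a window surface backed by the view's Surface
EGLSurface windowSurface = EGL14.eglCreateWindowSurface(mEGLDisplay, mEGLConfig,
        surface, new int[] { EGL14.EGL_NONE }, 0);
// ...and call EGL14.eglSwapBuffers(mEGLDisplay, windowSurface) after each frame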