Android MediaCodec change resolution

I am passing the output of a MediaExtractor into a MediaCodec decoder, and then passing the decoder's output buffer to an encoder's input buffer. The problem is that I need to reduce the resolution from the decoder's 1920x1080 output to 1280x720 by the time it comes out of the encoder. I can do this using a Surface, but I am targeting Android 4.1, so I need to achieve this another way. Does anyone know how to change the resolution of a video file using MediaCodec in a way that is compatible with 4.1?

You can use libswscale from libav/ffmpeg, or libyuv, or any other YUV handling library, or write your own downscaling routine - it's actually not very hard.
Basically, when you feed the decoder's output buffer into the encoder's input buffer, you already can't assume a plain copy will work, because the two may use different color formats. So, to be flexible, your copying code already needs to be able to convert any supported decoder output color format into any supported encoder input color format. In that copy step, you can also scale the data down. A trivial nearest-neighbor downscale is very simple to implement; better-looking scaling requires a bit more work.
You don't need to do a full SW decode/encode; you can just use software to adjust the data in the intermediate copy step. But as fadden pointed out, MediaCodec isn't completely stable prior to 4.3 anyway, so it may still not work on all devices.
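For reference, here is a rough sketch of what that copy-plus-downscale step could look like, assuming tightly packed planar I420 (COLOR_FormatYUV420Planar) on both sides; the method name and the no-padding assumption are mine, and real code also has to honor the codec's stride/slice-height and handle semiplanar (NV12/NV21) layouts:

```java
// Minimal sketch: nearest-neighbor downscale of a tightly packed I420 (planar YUV420)
// frame, e.g. 1920x1080 -> 1280x720. Assumes no row padding; a real implementation
// must also respect the decoder/encoder stride and slice height.
public static byte[] downscaleI420(byte[] src, int srcW, int srcH, int dstW, int dstH) {
    byte[] dst = new byte[dstW * dstH * 3 / 2];
    // Luma plane
    for (int y = 0; y < dstH; y++) {
        int sy = y * srcH / dstH;
        for (int x = 0; x < dstW; x++) {
            dst[y * dstW + x] = src[sy * srcW + (x * srcW / dstW)];
        }
    }
    // Chroma planes (each at quarter resolution)
    int srcUOff = srcW * srcH, srcVOff = srcUOff + srcUOff / 4;
    int dstUOff = dstW * dstH, dstVOff = dstUOff + dstUOff / 4;
    int srcCW = srcW / 2, srcCH = srcH / 2, dstCW = dstW / 2, dstCH = dstH / 2;
    for (int y = 0; y < dstCH; y++) {
        int sy = y * srcCH / dstCH;
        for (int x = 0; x < dstCW; x++) {
            int sx = x * srcCW / dstCW;
            dst[dstUOff + y * dstCW + x] = src[srcUOff + sy * srcCW + sx];
            dst[dstVOff + y * dstCW + x] = src[srcVOff + sy * srcCW + sx];
        }
    }
    return dst;
}
```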

Related

Garbled result with MediaCodec when using custom resolutions with Qualcomm codecs

I'm encoding a set of JPEGs into mp4 using the MediaCodec API. The photos can have any resolution, but I adjust all photos to be a multiple of 16 to ensure they have a size compatible with MediaCodec, and I make sure they are within the supported sizes returned by the codec's video capabilities.
I've found that on some old devices using the OMX.qcom.video.encoder.avc codec, some resolutions produce garbled videos, as seen in the samples below with different aspect ratios. The problem does not happen with standard aspect ratios such as 16:9 or 4:3, only with custom ones.
[Sample images omitted: original frames vs. garbled encoder output, for two different aspect ratios.]
Investigating the issue, I discovered through another user's question that this could be related to the fact that old Qualcomm devices require the Y plane of the YUV data to be aligned at a 2K boundary. But I'm not working with YUV data directly at all; instead I'm using an input Surface and rendering through OpenGL.
My guess is that the codec's underlying machinery for the input Surface works with YUV buffers anyway and the Qualcomm codec handles all the conversion, but that is just a guess. If so, is there any formula I could use to adjust the resolution and align it to such a boundary requirement, even if it produces some cropping? Or, if my guess is wrong, what could be causing this issue?
See the accepted answer to the question below for the statement about the 2K boundary alignment.
How to get stride and Y plane alignment values for MediaCodec encoder
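For reference, the size-adjustment step described above might look roughly like the sketch below (API 21+; the helper class and method names are mine). It rounds the target size down to the codec's reported alignment and verifies it with VideoCapabilities; note that this does not capture the Qualcomm 2K Y-plane alignment, which is not exposed through the public API:

```java
import android.graphics.Point;
import android.media.MediaCodecInfo;

public class CodecSizeHelper {
    // Round the target size down to the codec's reported alignment (at least a
    // multiple of 16) and verify it via VideoCapabilities (API 21+).
    static Point alignForCodec(MediaCodecInfo info, String mime, int width, int height) {
        MediaCodecInfo.VideoCapabilities caps =
                info.getCapabilitiesForType(mime).getVideoCapabilities();
        int wAlign = Math.max(16, caps.getWidthAlignment());
        int hAlign = Math.max(16, caps.getHeightAlignment());
        int w = (width / wAlign) * wAlign;
        int h = (height / hAlign) * hAlign;
        if (!caps.isSizeSupported(w, h)) {
            throw new IllegalArgumentException(
                    w + "x" + h + " is not supported by " + info.getName());
        }
        return new Point(w, h);
    }
}
```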

Input buffer coding for Qualcomm's AVC encoder via Android's MediaCodec

I'm trying to capture Android's views as bitmaps and save them as .mp4 file.
I'm using MediaCodec to encode bitmaps and MediaMuxer to mux them into .mp4.
Using the YUV420p color format, I expect the input buffers from MediaCodec to be of size resWidth * resHeight * 1.5, but Qualcomm's OMX.qcom.video.encoder.avc gives me more than that (no matter what resolution I choose). I believe it wants me to do some alignment in my input byte stream, but I have no idea how to find out what exactly it expects.
This is what I get when I pack my data tightly in input buffers on Nexus 7 (2013) using Qualcomm's codec: https://www.youtube.com/watch?v=JqJD5R8DiC8
And this video is made by the very same app ran on Nexus 10 (codec OMX.Exynos.AVC.Encoder): https://www.youtube.com/watch?v=90RDXAibAZI
So it looks like the luma plane is all right in the faulty video, but what happened to the chroma plane is a mystery to me.
I prepared a minimal (two-class) working code example exposing this issue: https://github.com/eeprojects/MediaCodecExample
You can reproduce the videos shown above just by running this app (you will get the same artefacts if your device uses Qualcomm's codec).
There are multiple ways of storing YUV 420 in buffers; you need to check the individual pixel format you chose. MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar and MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420PackedPlanar are in practice the same, called planar or I420 for short, while the others, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420PackedSemiPlanar and MediaCodecInfo.CodecCapabilities.COLOR_TI_FormatYUV420PackedSemiPlanar are called semiplanar or NV12.
In the semiplanar formats, you don't have two separate planes for U and V; instead you have one single plane with interleaved U,V pairs.
See https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/EncodeDecodeTest.java (lines 925-949) for an example of how to fill in the buffer for the semiplanar formats.
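For illustration, a minimal repack from planar I420 to semiplanar NV12 might look like this (the method name is mine; it assumes tightly packed input and ignores the encoder's stride/slice-height, which you still need to handle):

```java
// Minimal sketch: repack a tightly packed I420 frame (Y plane, then U plane, then V
// plane) into NV12/semiplanar layout (Y plane, then interleaved UV pairs), which is
// what COLOR_FormatYUV420SemiPlanar expects. This only fixes the plane layout, not
// any row or plane alignment the encoder may require.
public static byte[] i420ToNv12(byte[] i420, int width, int height) {
    byte[] nv12 = new byte[width * height * 3 / 2];
    int ySize = width * height;
    int uOff = ySize;             // start of U plane in the I420 input
    int vOff = ySize + ySize / 4; // start of V plane in the I420 input
    System.arraycopy(i420, 0, nv12, 0, ySize);    // Y plane is identical
    for (int i = 0; i < ySize / 4; i++) {
        nv12[ySize + 2 * i] = i420[uOff + i];     // U
        nv12[ySize + 2 * i + 1] = i420[vOff + i]; // V
    }
    return nv12;
}
```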

Rendering YUV format in android?

I'm trying to stream video in Android through ffmpeg; the output I get after decoding is in YUV format. Is it possible to render the YUV image format directly to the Android screen?
Yes and no.
The output of the camera and hardware video decoders is generally YUV. Frames from these sources are generally sent directly to the display. They may be converted by the driver, typically with a hardware scaler and format converter. This is necessary for efficiency.
There isn't an API to allow an app to pass YUV frames around the same way. The basic problem is that "YUV" covers a lot of ground. The buffer format used by the video decoder may be a proprietary internal format that the various hardware modules can process efficiently; for your app to create a surface in this format, it would have to perform a conversion, and you're right back where you were performance-wise.
You should be able to use GLES2 shaders to do the conversion for you on the way to the display, but I don't have a pointer to code that demonstrates this.
Update: an answer to this question has a link to a WebRTC source file that demonstrates doing the YUV conversion in a GLES2 shader.
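For illustration, a fragment shader along these lines should work for the planar case, assuming the three planes are uploaded as separate GL_LUMINANCE textures (the uniform and varying names are mine, and the constants assume full-range BT.601; limited-range YUV needs slightly different coefficients):

```java
// Rough sketch of the GLES2 approach: sample the Y, U and V planes from three
// textures and convert to RGB in the fragment shader.
private static final String YUV_TO_RGB_FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "varying vec2 vTexCoord;\n" +
        "uniform sampler2D uTexY;\n" +
        "uniform sampler2D uTexU;\n" +
        "uniform sampler2D uTexV;\n" +
        "void main() {\n" +
        "    float y = texture2D(uTexY, vTexCoord).r;\n" +
        "    float u = texture2D(uTexU, vTexCoord).r - 0.5;\n" +
        "    float v = texture2D(uTexV, vTexCoord).r - 0.5;\n" +
        "    gl_FragColor = vec4(y + 1.402 * v,\n" +
        "                        y - 0.344 * u - 0.714 * v,\n" +
        "                        y + 1.772 * u,\n" +
        "                        1.0);\n" +
        "}\n";
```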

how to dump yuv buffer from omxcodec in android

I followed the steps mentioned in the link below to get the gralloc buffer, but how do I get the size of the buffer?
How to dump YUV from OMXCodec decoding output
For testing I took the length as width x height x 1.5, since the OMXCodec decoder output format was OMX_QCOM_420Planer32m.
But when I write the YUV frame to a file, my YUV viewer is not able to render it. Then I tried the length from range_length(); that has the same issue.
I also converted the file to JPEG, and it is not correct either, as the YUV itself is wrong.
Please help me: how do I dump the YUV buffer to a file? I'm testing on KitKat (Moto G) and on ICS (a Samsung tablet).
thank you,
Raghu
As noted in links from the article you linked to, the buffer is not in a simple planar YUV format, but rather in a Qualcomm-proprietary format.
You need code that knows how to decode it. The accepted answer to this question seems to have it, though I don't know how stable the format definitions are, and there are some comments suggesting the code as posted isn't quite right.

Android MediaCodec: decode, process each frame, then encode

The example DecodeEditEncodeTest.java on bigflake.com demonstrates simple editing (swapping the color channels using an OpenGL fragment shader).
Here, I want to do some more complicated image processing on each frame (such as adding something to it).
Does that mean I cannot use a Surface and instead need to use buffers?
But from EncodeDecodeTest.java, it says:
(1) Buffer-to-buffer. Buffers are software-generated YUV frames in ByteBuffer objects, and decoded to the same. This is the slowest (and least portable) approach, but it allows the application to examine and modify the YUV data.
(2) Buffer-to-surface. Encoding is again done from software-generated YUV data in ByteBuffers, but this time decoding is done to a Surface. Output is checked with OpenGL ES, using glReadPixels().
(3) Surface-to-surface. Frames are generated with OpenGL ES onto an input Surface, and decoded onto a Surface. This is the fastest approach, but may involve conversions between YUV and RGB.
If I use buffer-to-buffer, which the above says is the slowest and least portable, how slow would it be?
Or should I use surface-to-surface and read the pixels back out from the surface?
Which way is more feasible?
Any example available?
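For reference, the buffer-to-buffer path from option (1) boils down to a copy loop like the sketch below (the class and method names are mine). It assumes the decoder's output color format matches the encoder's input format, which is often not the case in practice and is part of why this path is the least portable, and it omits format-change and end-of-stream handling:

```java
import java.nio.ByteBuffer;
import android.media.MediaCodec;

public class FrameCopy {
    // Copy one decoded frame from the decoder to the encoder, with a hook for
    // modifying the raw YUV bytes in between. Pre-API-21 buffer arrays are used to
    // match the bigflake examples. Real code must also handle INFO_TRY_AGAIN_LATER,
    // INFO_OUTPUT_FORMAT_CHANGED, stride/slice-height and end-of-stream.
    static void copyOneFrame(MediaCodec decoder, MediaCodec encoder) {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int decIndex = decoder.dequeueOutputBuffer(info, 10000 /* us */);
        if (decIndex < 0) {
            return; // no frame ready, or a format/buffers-changed event to handle
        }
        ByteBuffer frame = decoder.getOutputBuffers()[decIndex];
        frame.position(info.offset);
        frame.limit(info.offset + info.size);

        byte[] yuv = new byte[info.size];
        frame.get(yuv);
        // ... examine / modify the YUV bytes here (the "complicated image processing") ...

        int encIndex = encoder.dequeueInputBuffer(10000 /* us */);
        if (encIndex >= 0) {
            ByteBuffer encBuf = encoder.getInputBuffers()[encIndex];
            encBuf.clear();
            encBuf.put(yuv);
            encoder.queueInputBuffer(encIndex, 0, yuv.length, info.presentationTimeUs, 0);
        }
        decoder.releaseOutputBuffer(decIndex, false /* not rendering to a Surface */);
    }
}
```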
