Tegra 3 -- High latency with hardware decoding - Android

Hardware: Nexus 7
OS Version: Android 4.1.2
Problem:
I use OpenMAX IL to decode an H.264 video stream (640x480). Between the moment I submit a raw H.264 frame and the moment I receive the decoded YUV frame, roughly 2~4 seconds pass. The YUV frames themselves are correct.
Question:
Does the decoder really need that much time, or have I misconfigured something?
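
Before concluding that the hardware needs that long, it is worth measuring where the 2~4 seconds actually accumulate: in the input queueing, inside the decoder component, or in the output handling. Below is a minimal, decoder-agnostic latency tracker; the OpenMAX IL hook points named in the comments (OMX_EmptyThisBuffer, the FillBufferDone callback, the buffer's nTimeStamp) are assumptions about a typical IL integration, not code taken from the question.

// Minimal latency tracker: record the wall-clock time when a coded frame is
// submitted to the decoder and look it up again when the matching output
// frame (identified by its presentation timestamp) comes back out.
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <unordered_map>

class DecodeLatencyTracker {
public:
    // Call right before handing the input buffer to the decoder,
    // e.g. before OMX_EmptyThisBuffer, using the buffer's nTimeStamp.
    void onInputSubmitted(int64_t ptsUs) {
        inFlight_[ptsUs] = Clock::now();
    }
    // Call when the decoded frame arrives, e.g. in the FillBufferDone
    // callback. Returns latency in milliseconds, or -1 if the timestamp
    // was never registered.
    double onOutputReceived(int64_t ptsUs) {
        auto it = inFlight_.find(ptsUs);
        if (it == inFlight_.end()) return -1.0;
        double ms = std::chrono::duration<double, std::milli>(Clock::now() - it->second).count();
        inFlight_.erase(it);
        return ms;
    }
private:
    using Clock = std::chrono::steady_clock;
    std::unordered_map<int64_t, Clock::time_point> inFlight_;
};

int main() {
    DecodeLatencyTracker tracker;
    tracker.onInputSubmitted(/*ptsUs=*/0);   // would be the input buffer's timestamp
    std::printf("latency: %.1f ms\n", tracker.onOutputReceived(0));
    return 0;
}

If most of the time sits between submission and the output callback, the component really is buffering that many frames internally; if it sits before submission, the delay is upstream of the decoder.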

Related

Higher decoding latency on Snapdragon devices compared to Exynos devices with h264

I made the observation that, for devices in the same price range, Snapdragon-based decoders can have a much higher decoding latency than Exynos-based decoders. This is most noticeable when decoding H.264 streams whose SPS has pic_order_cnt_type set to 0 instead of 2.
I am wondering if you have also observed this behaviour and whether there is any fix for it (I have already opened an issue here, but no response so far).
Some technical details:
I have built a simple example app that uses AMediaCodec to decode H.264 streams. It uploads the measured "decoding latency" as a test result into a Firestore database (code).
Here is a comparison of the decoding latency for different H.264 streams on a Pixel 4 (Snapdragon decoder) and a Samsung Galaxy Note 9 (Exynos decoder):
[screenshots of the per-stream latency results for the Pixel 4 and the Galaxy Note 9]
As you can see, for the video called jetson/h264/2/test.h264 the decoding time on the Snapdragon device is ~21 times higher than on the Samsung device; pic_order_cnt_type==0 for this stream. On the other streams the difference in decoding time is insignificant (they all use pic_order_cnt_type==2).
The main parameter determining whether the Snapdragon decoder enters a "low-latency decoding path" seems to be the pic_order_cnt_type value mentioned above.
If I understand the H.264 spec correctly, when this value is set to 2, picture re-ordering is impossible (no buffered frames). When it is set to 0, picture re-ordering is possible, but not necessarily used by the encoder. The Snapdragon decoder, however, doesn't differentiate between "possible" and "actually used by the encoder", which leads to the big difference in decoding latency.
I was able to reduce the decoding latency on Snapdragon by manipulating the bitstream before sending it to the decoder (adding VUI with num_reorder_frames=0 and max_dec_frame_buffering=0), but that never results in 0 buffered frames, only fewer buffered frames.
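
One mitigation worth trying alongside the bitstream rewriting is to request the decoder's low-latency path at configure time. The sketch below uses the NDK AMediaCodec API that the test app already relies on; the "low-latency" key corresponds to MediaFormat.KEY_LOW_LATENCY (Android 11+), while the Qualcomm vendor key is an assumption on my part and may simply be ignored on other devices.

// Sketch: ask an NDK AMediaCodec H.264 decoder for a low-latency path via
// format keys. Whether these keys are honored is device- and firmware-dependent.
#include <android/native_window.h>
#include <media/NdkMediaCodec.h>
#include <media/NdkMediaFormat.h>

AMediaCodec* createLowLatencyH264Decoder(int width, int height, ANativeWindow* surface) {
    AMediaCodec* codec = AMediaCodec_createDecoderByType("video/avc");
    if (codec == nullptr) return nullptr;

    AMediaFormat* fmt = AMediaFormat_new();
    AMediaFormat_setString(fmt, AMEDIAFORMAT_KEY_MIME, "video/avc");
    AMediaFormat_setInt32(fmt, AMEDIAFORMAT_KEY_WIDTH, width);
    AMediaFormat_setInt32(fmt, AMEDIAFORMAT_KEY_HEIGHT, height);

    // Standard low-latency hint (MediaFormat.KEY_LOW_LATENCY, Android 11+).
    AMediaFormat_setInt32(fmt, "low-latency", 1);
    // Qualcomm vendor extension (assumption: recognized by many Snapdragon
    // decoders and silently ignored elsewhere).
    AMediaFormat_setInt32(fmt, "vendor.qti-ext-dec-low-latency.enable", 1);

    media_status_t status = AMediaCodec_configure(codec, fmt, surface,
                                                  nullptr /*crypto*/, 0 /*flags*/);
    AMediaFormat_delete(fmt);
    if (status != AMEDIA_OK) {
        AMediaCodec_delete(codec);
        return nullptr;
    }
    return codec;
}

Whether either key actually removes the reorder buffering for pic_order_cnt_type==0 streams varies by device, so this complements rather than replaces the VUI manipulation described above.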

Predict output sample size from MediaCodec for audio in Android NDK

Is there any way to get or set output sample size using MediaCodec in NDK?
I am using AMediaCodec_createDecoderByType to decode different codecs. For some codecs the output sample size is fixed: MP3 gives 1152 samples, AAC gives 1024, and so on. For .wav, however, I get a different sample size on different devices; it varies from 4096 to 16384 (uncertain).
Since I maintain a ring buffer of blocks after resampling and play from that buffer, I want to know at least the maximum sample size up front so I can allocate blocks of the right size. If there is any API or trick to find it, that would be very helpful.
Thanks,
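
As far as I know there is no query that returns the maximum output sample size up front, but two things can stand in for it: the per-frame payload length arrives in AMediaCodecBufferInfo.size, and the capacity reported by AMediaCodec_getOutputBuffer is an upper bound on what a single dequeue can deliver for this codec instance. A sketch of a drain loop built on that (the std::vector stands in for the ring buffer; names are illustrative):

// Sketch: drain decoded PCM from an NDK AMediaCodec. The capacity reported by
// AMediaCodec_getOutputBuffer() bounds the bytes a single dequeue can deliver,
// so it can be used to size fixed blocks; the actual payload length of each
// frame is AMediaCodecBufferInfo.size.
#include <media/NdkMediaCodec.h>
#include <cstdint>
#include <cstdio>
#include <vector>

void drainAudio(AMediaCodec* codec, std::vector<uint8_t>& pcmOut) {
    AMediaCodecBufferInfo info;
    for (;;) {
        ssize_t idx = AMediaCodec_dequeueOutputBuffer(codec, &info, 10000 /*timeoutUs*/);
        if (idx == AMEDIACODEC_INFO_TRY_AGAIN_LATER) break;            // nothing ready yet
        if (idx == AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED) continue;   // re-query sample rate etc. if needed
        if (idx == AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED) continue;
        if (idx < 0) break;

        size_t capacity = 0;                                           // codec-chosen maximum for this buffer
        uint8_t* buf = AMediaCodec_getOutputBuffer(codec, idx, &capacity);
        if (buf != nullptr && info.size > 0) {
            pcmOut.insert(pcmOut.end(), buf + info.offset, buf + info.offset + info.size);
            std::printf("frame: %d bytes (buffer capacity %zu bytes)\n", info.size, capacity);
        }
        AMediaCodec_releaseOutputBuffer(codec, idx, false /*render*/);
        if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM) break;
    }
}

In practice many players simply treat the block size as variable and copy info.size bytes per frame; if a fixed block size is required, allocating to the reported capacity (or growing lazily when a larger frame appears) avoids hard-coding per-device values such as 4096 or 16384.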

Is MediaCodec performance/latency affected by the frame resolution on Android?

I am creating an application that receives an RTSP stream that is H.264 encoded, and the application has to decode it using MediaCodec so it can finally be displayed. Something similar to the setup mentioned in this other post.
I wonder if anyone knows whether MediaCodec performance is affected by a change in the resolution of the RTSP stream, specifically the latency and the CPU (ARM) load.
I know MediaCodec is hardware accelerated, so I would expect fairly stable latency and CPU load, but if someone has performance numbers it would be great to compare against them and see whether my application is doing something silly that puts extra load on the CPU.
I can't test this myself right now because I am still trying to figure out the correct format for the buffers that MediaCodec expects for H.264 once I get the RTSP payload.
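
On the buffer-format part of the question: the H.264 decoders behind MediaCodec expect Annex-B input, i.e. each NAL unit prefixed with a 00 00 00 01 start code, with SPS/PPS either sent in-band before the first IDR frame or supplied as csd-0/csd-1 in the MediaFormat. A small sketch of wrapping a NAL unit extracted from an RTP payload (the helper name is illustrative):

// Sketch: prepend an Annex-B start code to a raw NAL unit extracted from an
// RTP payload so it can be copied into a MediaCodec/AMediaCodec input buffer.
// Fragmented NAL units (FU-A) must be reassembled before this step.
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<uint8_t> toAnnexB(const uint8_t* nal, size_t nalSize) {
    static const uint8_t startCode[4] = {0x00, 0x00, 0x00, 0x01};
    std::vector<uint8_t> out;
    out.reserve(nalSize + sizeof(startCode));
    out.insert(out.end(), startCode, startCode + sizeof(startCode));
    out.insert(out.end(), nal, nal + nalSize);
    return out;
}

Each input buffer is then queued with the frame's RTP timestamp converted to microseconds as the presentation time.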

Adjust the buffer size of MediaCodec's decoder on Android 4.2

I'm decoding an H.264 stream on Android 4.2 using MediaCodec. Unfortunately, the decoder always buffers 6-10 frames, which leads to annoying latency, and Android does not provide any API to adjust the buffer size. So my question is: how can I modify the Android source code (or the OMX driver) to reduce the buffer size for realtime video streaming?
Generally speaking, you don't. The number of buffers in the queue is determined by the codec. Different devices, and different codecs on the same device, can behave differently.
Unless you're using the software AVC codec, the codec implementation is provided as a binary by the hardware OEM, so there's no way to modify it (short of hex-editing).

Does MediaCodec truncate incoming packets for decoding?

I'm using MediaCodec to decode H.264 packets that were encoded with ffmpeg. When I decode with ffmpeg, the frames display fine. However, when I decode with the MediaCodec hardware decoder, I sometimes get black bars that show up in the middle of the frame. This only happens when the encoding bitrate is set high enough (say upwards of 4,000,000 bps) that any given AVPacket grows above roughly 95,000 bytes. It looks as if MediaCodec (or the underlying decoder) is truncating the frames. Unfortunately, I need the quality, so the bitrate can't be turned down. I've verified that the frames aren't being truncated elsewhere, and I've tried setting MediaFormat.KEY_MAX_INPUT_SIZE to something higher.
Has anyone run into this issue, or does anyone know of a way I can work around it?
I've attached an image of random pixels that I rendered in OpenGL and then decoded on my Galaxy S4.
I figured out what the issue was. I had to increase an incoming socket buffer in order to receive all the packet data. Since I was using a Live555 RTSP client, I used its increaseReceiveBufferTo() function to do so.
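
For reference, here is a hedged sketch of where that call usually goes in a Live555-based client: after the RTSP SETUP, enlarge the OS receive buffer on each subsession's RTP socket. The 2 MB figure is an illustrative value, not something from the original post.

// Sketch: enlarge the OS receive buffer of a Live555 RTP socket so that large
// coded frames at high bitrates are not dropped before they reach MediaCodec.
#include <liveMedia.hh>
#include <GroupsockHelper.hh>   // increaseReceiveBufferTo()

void enlargeReceiveBuffer(UsageEnvironment& env, MediaSubsession& subsession) {
    if (subsession.rtpSource() == nullptr) return;   // e.g. a raw-UDP subsession
    int socketNum = subsession.rtpSource()->RTPgs()->socketNum();
    // 2 MB is an arbitrary example; pick something comfortably above the
    // largest expected access unit at your bitrate.
    unsigned newSize = increaseReceiveBufferTo(env, socketNum, 2 * 1024 * 1024);
    env << "RTP receive buffer increased to " << newSize << " bytes\n";
}

Enlarging the socket buffer addresses loss on the receiving side; MediaFormat.KEY_MAX_INPUT_SIZE only controls how large an input buffer MediaCodec allocates, which is why raising it alone did not help here.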
