I'm trying to decode an H.264 stream using Android's MediaCodec interface. Everything works just fine on my test devices, but on one customer device that I don't have access to (a Samsung Tab S) there are strange issues.
When I decode the stream I don't send any SPS/PPS NALs or an initial frame. I just start pushing the data from the live stream, chopped into blocks ending with a 0x09 NAL, and the decoder synchronizes itself quite quickly without problems.
The issue on at least this one device is that the BufferInfo I get from the decoder claims it decoded 1413120 bytes of data, but the output buffer size is only 1382400! So of course if I try to read that much data out of the buffer it crashes.
The video is 1280x720 and is decoded into NV12, so the buffer size is just fine; it is the reported decoded output size that isn't. If I force the size to 1382400 and convert the NV12 into RGB, I get an almost correct picture: the first 32 lines have a strong green color and the blue channel is shifted by quite a lot. This means the UV plane is decoded partially wrong on this device.
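(For reference, an NV12 frame at 1280x720 is 1280 * 720 * 1.5 = 1382400 bytes, which matches the buffer size. The reported 1413120 bytes would correspond to 1280 * 736 * 1.5, i.e. the height padded up to a multiple of 32, so presumably the decoder is reporting its internally aligned frame size.)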
Has anyone run into this kind of an issue before? I have recorded the raw h264 stream from that specific device and it plays just fine with no green blocks or color shifts.
Should I really set up SPS/PPS and an initial frame before starting the streaming? The stream seems to contain everything needed, since the decoder detects the correct resolution, sets up buffers and decodes on every other device I've tested except this one. So I'm just wondering if Samsung has something special going on.
Another application decoding the same stream shows it without issues, but as far as I know they use ffmpeg internally, not MediaCodec. I would rather use built-in system codecs if possible.
Here is an example of the result. I don't have an image of just the raw stream, and note that the frame is rotated. The Y component in the green area is just fine, and on the white block on the right you can clearly see the blue shift.
Edit: Even if I start the decoder with the SPS/PPS blocks in csd-0 the color problems persist. So it isn't due to that.
I also managed to test the exact same stream on another device: no green bar, no color shifts. So it's a problem with the codec in that particular device/model.
I had similar issues in the past (specifically on Samsung devices), and if I remember correctly it was due to missing SPS/PPS data. You must feed in the SPS/PPS data if you want consistent results.
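One way to do that (a minimal sketch, assuming you already have the raw sps and pps byte arrays including their Annex B start codes, and a configured, started decoder) is to queue them as a codec-config buffer before the first real frame:

int inIndex = decoder.dequeueInputBuffer(10000);
if (inIndex >= 0) {
    // getInputBuffer() is API 21+; on older devices use decoder.getInputBuffers()[inIndex]
    ByteBuffer buf = decoder.getInputBuffer(inIndex);
    buf.clear();
    buf.put(sps);
    buf.put(pps);
    decoder.queueInputBuffer(inIndex, 0, sps.length + pps.length, 0,
            MediaCodec.BUFFER_FLAG_CODEC_CONFIG);
}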
Not a direct solution to your issue but a possible workaround is to use an alternative decoder (if present) when running on that particular device.
I'm not sure how you are instantiating your decoder but often people use the mime type like so:
decoder = MediaCodec.createDecoderByType("video/avc");
The device will then choose the preferred decoder (probably hardware).
You can alternatively instantiate a specific decoder like so:
decoder = MediaCodec.createByCodecName("OMX.google.h264.decoder");
// OR
decoder = MediaCodec.createByCodecName("OMX.qcom.video.decoder.avc");
In my experience, most devices have at least two different H.264 decoders available, and you may find that an alternative decoder on this device performs without faults.
You can list all the available codecs using the following code:
static MediaCodecInfo[] getCodecs() {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
        MediaCodecList mediaCodecList = new MediaCodecList(MediaCodecList.ALL_CODECS);
        return mediaCodecList.getCodecInfos();
    } else {
        int numCodecs = MediaCodecList.getCodecCount();
        MediaCodecInfo[] mediaCodecInfo = new MediaCodecInfo[numCodecs];
        for (int i = 0; i < numCodecs; i++) {
            MediaCodecInfo codecInfo = MediaCodecList.getCodecInfoAt(i);
            mediaCodecInfo[i] = codecInfo;
        }
        return mediaCodecInfo;
    }
}
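To go from that list to an alternative decoder, something like the following should work (a rough sketch; the log tag is just an example):

// List the names of all decoders that claim H.264 support, so you can try
// an alternative one with MediaCodec.createByCodecName().
for (MediaCodecInfo info : getCodecs()) {
    if (info.isEncoder()) continue;
    for (String type : info.getSupportedTypes()) {
        if (type.equalsIgnoreCase("video/avc")) {
            Log.d("CodecList", "H.264 decoder: " + info.getName());
        }
    }
}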
We faced a similar out-of-memory issue when trying to play on an Amazon Fire TV Stick (AFTMM) using the H.265 decoder at the 480p and 1440p layers. The issue occurred for both SDR and HDR playback. While this is definitely a device-specific issue, there are some possible workarounds, such as reducing the ref value and/or the CRF while encoding. That worked for us.
Related
I am using MediaCodec to decode an H.264 stream on a Samsung S6 (Android 5.1.1) and found that the input buffers to MediaCodec must start with "0001" start codes (and I don't need to set SPS/PPS); otherwise ACodec reports an error.
I also tried using MediaExtractor to play an MP4 file, and that works fine, but the buffers it produces for MediaCodec do not start with "0001".
I don't know why decoding an H.264 stream has such a limitation. Currently I need to analyze the stream from the socket and cut the data into small packages (each starting with 0001) before giving them to MediaCodec, but it is inefficient.
MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1024, 1024);
Some specific decoders may also be able to decode H.264 NAL units in the "mp4" format (with length prefixes instead of startcodes), but that's not guaranteed across all devices.
It may be that Samsung's version of MediaExtractor returns it in this format, if they know that their own decoder can handle it. There is at least earlier precedent of Samsung doing a similar nonstandard thing with timestamps in their version of MediaExtractor, see e.g. https://code.google.com/p/android/issues/detail?id=74356.
(Having MediaExtractor return data that only the current device's decoder can handle is wrong IMO, since one may want to use MediaExtractor to read a file but send the compressed data over the network to another device for decoding, and in that case a nonstandard format breaks things.)
As fadden wrote, MediaCodec operates on full NAL units, so you need to provide data in this format (even if it feels inefficient). If you receive data over a socket in a format where information about frame boundaries isn't easily available, then that's an issue with your protocol format (implementing RTP reception is not easy, for example), not with MediaCodec itself. It's a quite common limitation to need full frames before decoding, instead of being able to feed random chunks until you have a full frame, and this shouldn't be inefficient unless your own implementation of it is inefficient.
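As an aside, if an extractor does hand you length-prefixed ("mp4" style) NAL units and you need plain Annex B data with startcodes, the conversion is mechanical. A rough sketch, assuming 4-byte big-endian length prefixes (the helper name is made up):

static byte[] lengthPrefixedToAnnexB(byte[] in) {
    java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream(in.length + 16);
    int pos = 0;
    while (pos + 4 <= in.length) {
        int nalLength = ((in[pos] & 0xFF) << 24) | ((in[pos + 1] & 0xFF) << 16)
                      | ((in[pos + 2] & 0xFF) << 8) | (in[pos + 3] & 0xFF);
        pos += 4;
        out.write(new byte[] { 0, 0, 0, 1 }, 0, 4);   // Annex B startcode
        out.write(in, pos, nalLength);                // the NAL unit itself
        pos += nalLength;
    }
    return out.toByteArray();
}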
In general, Android expects one NAL unit per input buffer. For some devices I have found that setting csd-0/csd-1 on the MediaFormat for H.264 does not work consistently, but if you feed each of the parameter sets as input buffers, MediaCodec will pick it up as a format change.
int outputBufferIndex = NativeDecoder.DequeueOutputBuffer (info, 1000);

if (outputBufferIndex == (int)MediaCodec.InfoOutputFormatChanged) {
    Console.WriteLine ("Format changed: {0}", NativeDecoder.OutputFormat);
} else if (outputBufferIndex >= 0) {
    CodecOutputBufferAvailable (NativeDecoder, outputBufferIndex, info);
}
Also note that it is mandatory on Nexus and some other Samsung devices to set:
formatDescription.SetInteger(MediaFormat.KeyWidth, SelectedPalette.Value.Width);
formatDescription.SetInteger(MediaFormat.KeyHeight, SelectedPalette.Value.Height);
formatDescription.SetInteger(MediaFormat.KeyMaxInputSize, SelectedPalette.Value.Width * SelectedPalette.Value.Height);
I am lucky that in my situation I can query these resolutions, but you can also parse the resolution manually from the SPS NAL unit.
// NOTE: I am using Xamarin here, but the calls are pretty much the same. I am fairly certain there are bugs in the Xamarin iOS VideoToolbox wrapper, so keep that in mind if you're ever considering Xamarin for video decoding. It's great for everything except anything that's slightly more custom or low-level.
I'm having a very difficult time with MediaCodec. I've used it previously to decode a raw H.264 stream and learned a significant amount, or at least I thought I had.
My stream is H.264 in Annex B format. Looking at the raw data, the structure of my NAL unit types is as follows:
[0x09][0x09][0x06] and then 8 packets of [0x21].
[0x09][0x09][0x27][0x28][0x06] and then 8 packets of [0x21].
This is not how I am receiving them. I am attempting to build a complete Access Unit from these raw NAL unit types.
The first thing that is strange to me is the double [0x09], which is the Access Unit Delimiter NAL. I am pretty sure the H.264 spec allows only one AUD per Access Unit. By the way, I am able to record the raw data and play it using ffmpeg with, and without, the extra AUD. For now, I am detecting this case and stripping the first AUD off before sending the entire Access Unit to MediaCodec.
The second thing is that I have hardcoded the SPS/PPS byte arrays [0x27/0x28], and I am setting them in the MediaFormat used to initialize the MediaCodec, similar to:
format.setByteBuffer("csd-0", ByteBuffer.wrap( mySPS ));
format.setByteBuffer("csd-1", ByteBuffer.wrap( myPPS ));
My video stream provider tells me the video is 1280 x 720; however, when I convert it to an MP4 file, the metadata says it's 960 x 720. Another oddity.
Changing these different parameters around, I am still unable to get a valid buffer index in the thread that processes the decoder output (dequeueOutputBuffer returns -1, i.e. INFO_TRY_AGAIN_LATER). I have also varied the timeout, to no avail. If I manually send the SPS/PPS as the first packet instead of using the example above, I do get the -3 "output buffers have changed" (INFO_OUTPUT_BUFFERS_CHANGED) result, which is meaningless since I am using API 20. But everything else I get back is -1.
I've read about the Emulation Prevention Byte encoding in H.264. I am able to strip these bytes out and send the data to MediaCodec, but it doesn't seem to make a difference. Also, the MediaCodec documentation doesn't explicitly say whether it expects the emulation prevention bytes to be stripped out or not.
Other than the video frame resolution, the only other thing that is different from my previous success is the existence of the SEI packet type [0x06]. I'm not sure if I should be doing something special with this or not.
I know a number of folks who have used MediaCodec have had issues with it, mostly because the documentation is not very good. Can anyone offer any advice as to what I could be doing wrong?
I'm trying to implement an Android app that records a video while also writing a file containing the times at which each frame was taken. I tried using MediaRecorder and got to a point where I can get the video saved, but I couldn't find a way to get the timestamps. I tried doing something like:
while (previous video file length != current video file length)
write current time to text file;
but this did not seem to work, as file length doesn't seem to be updated frequently enough (or am I wrong?).
I then tried using OpenCV and managed to capture the images frame by frame (and therefore getting timestamps was easy), but I couldn't find a way to join all the frames to one video. I saw answers referring to using NDK and FFmpeg, but I feel like there should be an easier solution (perhaps similar to the one I tried at the top?).
You could use MediaCodec to capture the video, but that gets complicated quickly, especially if you want audio as well. Alternatively, you can recover the timestamps from the video file by walking through it with MediaExtractor: just call getSampleTime() on each frame to get the presentation time in microseconds.
This question is old, but I just needed the same thing. With the camera2 API you can stream to multiple targets, so you can stream from one and the same physical camera to a MediaRecorder and at the same time to an ImageReader. The ImageReader has an ImageReader.OnImageAvailableListener whose onImageAvailable callback you can use to query the frames for their timestamps:
image = reader.acquireLatestImage();
long cameraTime = image.getTimestamp();   // timestamp in nanoseconds
image.close();                            // always release the image back to the reader
Not sure whether streaming to multiple targets requires certain hardware capabilities, though (such as multiple image processors). I guess modern devices have about 2-3 of them.
EDIT: It looks like every camera2-capable device can stream to up to three targets.
EDIT: Apparently, you can also use an ImageReader to capture the individual frames and their timestamps, and then use an ImageWriter to provide those images to downstream consumers such as a MediaCodec, which records the video for you.
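A minimal sketch of the multi-target setup (assuming you already have an open cameraDevice, a recorderSurface obtained from MediaRecorder, and a background handler; the names are illustrative):

ImageReader reader = ImageReader.newInstance(1280, 720, ImageFormat.YUV_420_888, 3);
reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireLatestImage();
    if (image != null) {
        long timestampNs = image.getTimestamp();   // log or store this per frame
        image.close();
    }
}, handler);

List<Surface> targets = Arrays.asList(recorderSurface, reader.getSurface());
cameraDevice.createCaptureSession(targets, new CameraCaptureSession.StateCallback() {
    @Override public void onConfigured(CameraCaptureSession session) {
        try {
            CaptureRequest.Builder builder =
                    cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
            builder.addTarget(recorderSurface);
            builder.addTarget(reader.getSurface());
            session.setRepeatingRequest(builder.build(), null, handler);
        } catch (CameraAccessException e) {
            // handle the error
        }
    }
    @Override public void onConfigureFailed(CameraCaptureSession session) { }
}, handler);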
Expanding on fadden's answer, the easiest way to go is to record using MediaRecorder (the Camera2 example helped me a lot) and afterwards extract the timestamps via MediaExtractor. I want to add a working MediaExtractor code sample:
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(pathToVideo);   // may throw IOException
extractor.selectTrack(0);               // a track must be selected before reading sample times (assuming track 0 is the video track)

do {
    long sampleMillis = extractor.getSampleTime() / 1000;   // presentation time in milliseconds
    // do something with sampleMillis
} while (extractor.advance() && extractor.getSampleTime() != -1);
I am working on an implementation of one of the Android test cases regarding previewTexture recording with the new MediaCodec and MediaMuxer APIs of Android 4.3.
I've managed to record the preview stream at a framerate of about 30 fps by setting the recordingHint in the camera parameters.
However, I ran into a delay/lag problem and don't really know how to fix it. When recording the camera preview with quite standard quality settings (1280x720, bitrate of ~8,000,000) the preview and the encoded material suffer from occasional lags. To be more specific: this lag occurs about every 2-3 seconds and lasts about 300-600 ms.
By tracing the delay I was able to figure out the delay comes from the following line of code in the "drainEncoder" method:
mMuxer.writeSampleData(mTrackIndex, encodedData, mBufferInfo);
This line is called in a loop whenever the encoder has data available for muxing. Currently I don't record audio, so only the H.264 stream is converted to an MP4 container by the MediaMuxer.
I don't know if this has something to do with the delay, but it always occurs when the loop needs two iterations to dequeue all available data from the encoder (to be even more specific, it always occurs in the first of these two iterations). In most cases one iteration is enough to drain the encoder.
Since there is not much information online about these new APIs, any help is very appreciated!
I suspect you're getting bitten by the MediaMuxer disk write. The best way to be sure is to run systrace during recording and see what's actually happening during the pause. (systrace docs, explanation, bigflake example -- as of right now only the latter is updated for Android 4.3)
If that's the case, you may be able to mitigate the problem by running the MediaMuxer instance on a separate thread, feeding the H.264 data to it through a synchronized queue.
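A rough sketch of that idea (assuming the mMuxer, mTrackIndex and mBufferInfo fields from your drainEncoder method; the Sample class and queue are made-up names):

// Deep-copy each encoded sample so the codec's output buffer can be released
// immediately, then let a dedicated thread do the (possibly slow) disk write.
static class Sample {
    final ByteBuffer data;
    final MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    Sample(ByteBuffer src, MediaCodec.BufferInfo srcInfo) {
        src.position(srcInfo.offset);
        src.limit(srcInfo.offset + srcInfo.size);
        data = ByteBuffer.allocateDirect(srcInfo.size);
        data.put(src);
        data.flip();
        info.set(0, srcInfo.size, srcInfo.presentationTimeUs, srcInfo.flags);
    }
}

final BlockingQueue<Sample> muxQueue = new LinkedBlockingQueue<>();

// In drainEncoder(), replace mMuxer.writeSampleData(...) with:
//     muxQueue.put(new Sample(encodedData, mBufferInfo));
// and also queue the final end-of-stream sample so the thread below can exit.

Thread muxerThread = new Thread(() -> {
    try {
        while (true) {
            Sample s = muxQueue.take();   // blocks until a sample is queued
            if ((s.info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) break;
            mMuxer.writeSampleData(mTrackIndex, s.data, s.info);
        }
    } catch (InterruptedException ignored) { }
});
muxerThread.start();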
Do these pauses happen regularly, every 5 seconds? The CameraToMpegTest example configures the encoder to output an I-frame every 5 seconds (with an expected frame rate of 30fps), which results in a full-sized frame being output rather than tiny deltas.
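(That interval comes from the MediaFormat.KEY_I_FRAME_INTERVAL value passed to the encoder, e.g. format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5); the value is in seconds.)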
As fadden points out, this is a disk-write issue that occurs mostly on devices with slower flash write speeds, or if you try to write to the SD card.
I have written up a solution for buffering MediaMuxer's writes in a similar question here.
I'm using MediaCodec to decode H.264 packets that were encoded with ffmpeg. When I decode with ffmpeg, the frames display fine. However, when I decode with the MediaCodec hardware decoder I sometimes get black bars that show up in the middle of the frame. This only happens if the encoding bitrate is set high enough (say upwards of 4,000,000) so that any given AVPacket size goes above 95,000 or so. It seems like MediaCodec (or the underlying decoder) is truncating the frames. Unfortunately, I need the quality, so the bitrate can't be turned down. I've verified that the frames aren't being truncated elsewhere, and I've tried setting MediaFormat.KEY_MAX_INPUT_SIZE to something higher.
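For reference, that last attempt looked roughly like this (the exact value is just an example):

format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 1024 * 1024);   // set before decoder.configure(...)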
Has anyone run into this issue, or does anyone know of a way I can work around it?
I've attached an image of random pixels that I rendered in OpenGL and then decoded on my Galaxy S4.
I figured out what the issue was: I had to increase the incoming socket buffer in order to receive all of the packet data. Since I was using a Live555 RTSP client, I used its increaseReceiveBufferTo() function to do so.