I use MediaCodec to encode camera data. On a Vivo X5 Pro (Android 5.0, API 21), the size of each buffer MediaCodec encodes is above 90000 bytes, while on other devices it is normally about 15000 bytes. However I change the parameters of the media format, it doesn't help.
I also found that when I get the format with MediaCodec.getOutputFormat() on the Vivo X5 Pro (Android 5.0, API 21), it has 7 entries, one more than in the normal case: a key named "buffer-size" with the value 1048576. Could this influence the MediaCodec encoding? How can I get the Vivo X5 Pro (Android 5.0, API 21) to encode a normal data size? Thanks for any help!
MediaCodec mEncoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 320, 568);
format.setInteger(MediaFormat.KEY_BIT_RATE, 320 * 568 * 2); // ~364 kbps
format.setInteger(MediaFormat.KEY_FRAME_RATE, 20);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1); // one key frame per second
format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 320 * 568);
PS: No matter how I change the parameters, the encoded data size on the Vivo X5 Pro (Android 5.0, API 21) stays above 90000 bytes.
Make sure that you pass proper timestamps for the input frames, in the right unit (microseconds), otherwise the encoder can allocate too few/many bits per frame - see https://stackoverflow.com/a/20663056/3115956 for a longer explanation of this.
Have you tried lowering/raising the bitrate parameter to see if it actually changes the output size? Otherwise the encoder might just be broken in the sense that it doesn't respect this parameter. (Also, the size of the first output packet might not totally reflect what the total average bitrate of the output stream is. So even if the first frame might be a bit too big, the later ones can potentially make up for it.)
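For reference, this is roughly what passing microsecond timestamps looks like with the buffer-based input path. A minimal sketch (frameIndex and yuvFrame are hypothetical names, not from the question; the 20 fps value matches the format above):

// Presentation timestamps are in microseconds; derive them from a frame
// counter at the nominal frame rate rather than passing 0 for every frame.
long presentationTimeUs = frameIndex * 1_000_000L / 20;

int inputIndex = mEncoder.dequeueInputBuffer(10_000);
if (inputIndex >= 0) {
    ByteBuffer inputBuffer = mEncoder.getInputBuffer(inputIndex); // API 21+
    inputBuffer.clear();
    inputBuffer.put(yuvFrame); // raw camera frame in the codec's color format
    mEncoder.queueInputBuffer(inputIndex, 0, yuvFrame.length, presentationTimeUs, 0);
    frameIndex++;
}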
I am trying to limit the frame rate to 5-10 fps.
The encoded frames are being sent through a WebSocket connection, and my goal is to limit the bandwidth to 1 Mbps while keeping good quality at a larger resolution.
My current attempt is:
val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_VP8, size.width, size.height)
format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface)
format.setInteger(MediaFormat.KEY_BIT_RATE, 1000000)
format.setInteger(MediaFormat.KEY_BITRATE_MODE, MediaCodecInfo.EncoderCapabilities.BITRATE_MODE_CBR_FD)
format.setInteger(MediaFormat.KEY_FRAME_RATE, 1)
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5)
format.setInteger(MediaFormat.KEY_MAX_FPS_TO_ENCODER, 1)

encoderInput = mEncoder!!.createInputSurface()
encoderInput.setFrameRate(1f, Surface.FRAME_RATE_COMPATIBILITY_FIXED_SOURCE, Surface.CHANGE_FRAME_RATE_ALWAYS)
I tried setting the frame rate both in the format and on the surface, but it is still not limited. Based on this discussion, MediaCodec KEY_FRAME_RATE seems to be ignored.
The bandwidth does follow the bit rate set in the format, but not the frame rate, so the quality is bad.
So how can I reduce the frame rate, preferably in a way that supports older versions of Android?
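The thread has no accepted answer. Since a surface-input encoder simply encodes whatever frames are drawn into its input surface, one approach that works on older Android versions is to throttle the producer and skip frames before they ever reach the surface. A minimal sketch in Java (the helper and field names are my own, not from the thread):

// Drop frames at the source: only draw into the encoder's input surface
// when enough time has elapsed since the last rendered frame.
private static final int TARGET_FPS = 10; // hypothetical target
private long lastFrameTimeNs = 0;

private boolean shouldRenderFrame() {
    long nowNs = System.nanoTime();
    long minIntervalNs = 1_000_000_000L / TARGET_FPS;
    if (nowNs - lastFrameTimeNs < minIntervalNs) {
        return false; // skip this camera frame entirely
    }
    lastFrameTimeNs = nowNs;
    return true;
}

This never relies on KEY_FRAME_RATE or KEY_MAX_FPS_TO_ENCODER being honored by the vendor codec, which, per the discussion mentioned above, they often aren't.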
I'm working with Android MediaCodec, using it for real-time H264 encoding and decoding of frames from the camera. I use MediaCodec in synchronous mode and render the output to the decoder's Surface, and everything works fine except that I have a long latency relative to real time: 1.5-2 seconds, and I'm very confused about why.
I measured the total time of the encoding and decoding processes and it stays around 50-65 milliseconds, so I don't think the problem is there.
I tried changing the configuration of the encoder, but it didn't help; it is currently configured like this:
val formatEncoder = MediaFormat.createVideoFormat("video/avc", 1920, 1080)
formatEncoder.setInteger(MediaFormat.KEY_FRAME_RATE, 30)
formatEncoder.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5)
formatEncoder.setInteger(MediaFormat.KEY_BIT_RATE, 1920 * 1080)
formatEncoder.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface)
val encoder = MediaCodec.createEncoderByType("video/avc")
encoder.configure(formatEncoder, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
val inputSurface = encoder.createInputSurface() // I use it to send frames from camera to encoder
encoder.start()
Changing the configuration of the decoder also didn't help at all; it is currently configured like this:
val formatDecoder = MediaFormat.createVideoFormat("video/avc", 1920, 1080)
val decoder = MediaCodec.createDecoderByType("video/avc")
decoder.configure(formatDecoder , outputSurface, null, 0) // I use outputSurface to render decoded frames into it
decoder.start()
I use the following timeouts when waiting for available encoder/decoder buffers. I tried reducing their values, but it didn't help, so I left them like this:
var TIMEOUT_IN_BUFFER = 10000L // microseconds
var TIMEOUT_OUT_BUFFER = 10000L // microseconds
I also measured the time it takes the inputSurface to consume a frame: 0.03-0.05 milliseconds, so it isn't a bottleneck. Actually, I measured all the places where a bottleneck could be, but I didn't find anything, so I think the problem is in the encoder or decoder itself, or in their configuration, or maybe I should use some special routine for sending frames to be encoded/decoded.
I also tried using a HW-accelerated codec, and it's the only thing that helped: with it the latency drops to ~500-800 milliseconds, but that still isn't good enough for real-time streaming.
It seems to me that the encoder or decoder buffers several frames before starting to display them on the surface, and eventually this leads to the latency. If that is really the case, how can I disable this buffering or reduce it?
Please help, I've been stuck on this problem for about half a year and have no idea how to reduce the latency. I'm sure it's possible, because popular apps like Telegram, Viber, WhatsApp etc. work fine and without latency, so what's the secret here?
UPD 07.07.2021:
I still haven't found a solution to get rid of the latency. I've tried changing H264 profiles, increasing and decreasing the I-frame interval, bitrate, and framerate, but the result is the same. The only thing that helps a little is downgrading the resolution from 1920x1080 to e.g. 640x480, but this "solution" doesn't suit me because I want to encode/decode real-time video at 1920x1080.
UPD 08.07.2021:
I found out that if I change the values of TIMEOUT_IN_BUFFER and TIMEOUT_OUT_BUFFER from 10_000L to 100_000L, the latency decreases a bit, but the delay before the first frame is shown after starting the encoding/decoding process increases quite a lot.
It's possible your encoder is producing B frames -- bidirectionally predicted frames. They increase quality and latency, and are great for movies, but no good for low-latency applications.
Key frames = I (intra-coded frames, independent of other frames)
Predicted frames = P (coded as differences from previous frames)
Bidirectionally predicted frames = B (coded from both previous and following frames)
A sequence of frames including B frames might look like this:
Frame type:   I  B  B  B  P  B  B  B  P  B  B  B  P  B  B  B  I
Frame number: 1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17
The encoder must encode each P frame, and the decoder must decode it, before the preceding B frames make any sense. So in this example the frames get encoded out of order like this:
1 5 2 3 4 9 6 7 8 13 10 11 12 17 14 15 16
In this example the decoder can't handle frame 2 until the encoder has sent frame 5.
On the other hand, this sequence without B frames allows coding and decoding the frames in order.
IPPPPPPPPPPIPPPPPPPPP
Try using the Constrained Baseline Profile setting. It's designed for low latency and low power use. It suppresses B frames. I think this works.
mediaFormat.setInteger(
        MediaFormat.KEY_PROFILE, // the "profile" key; KEY_PROFILE requires API 21
        MediaCodecInfo.CodecProfileLevel.AVCProfileConstrainedBaseline); // constant requires API 27
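Before forcing the profile it may be worth checking that the encoder actually advertises it. A sketch using the standard capability query (codecInfo is assumed to be the chosen encoder's MediaCodecInfo, not something from the answer above):

MediaCodecInfo.CodecCapabilities caps = codecInfo.getCapabilitiesForType("video/avc");
boolean constrainedBaselineSupported = false;
for (MediaCodecInfo.CodecProfileLevel pl : caps.profileLevels) {
    if (pl.profile == MediaCodecInfo.CodecProfileLevel.AVCProfileConstrainedBaseline) {
        constrainedBaselineSupported = true;
        break;
    }
}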
I believe the Android H264 decoder has latency (at least in most cases I've tried). That's probably why the Android developers added PARAMETER_KEY_LOW_LATENCY in API level 30.
However, I could decrease the delay by a few frames by querying for the output a few more times.
Reason: no idea. It's just the result of boring trial and error.
int inputIndex = m_codec.dequeueInputBuffer(-1); // pass -1 here because we don't have a playback time reference
if (inputIndex >= 0) {
    ByteBuffer buffer;
    if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.LOLLIPOP) {
        buffer = m_codec.getInputBuffer(inputIndex);
    } else {
        ByteBuffer[] bbuf = m_codec.getInputBuffers();
        buffer = bbuf[inputIndex];
    }
    buffer.put(frame);
    // tell the decoder to process the frame
    m_codec.queueInputBuffer(inputIndex, 0, frame.length, 0, 0);
}

// Query the output several times per input frame; this is what reduced the delay.
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
for (int i = 0; i < 3; i++) {
    int outputIndex = m_codec.dequeueOutputBuffer(info, 0);
    if (outputIndex >= 0) {
        m_codec.releaseOutputBuffer(outputIndex, true);
    }
}
You need to configure customized low-latency parameters (or KEY_LOW_LATENCY, if it is supported) for the different CPU vendors. It is a common problem on Android phones.
Check this code https://github.com/moonlight-stream/moonlight-android/blob/master/app/src/main/java/com/limelight/binding/video/MediaCodecHelper.java
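On API 30+ the platform route looks roughly like this. A sketch of the standard API only (decoderInfo is assumed to be the chosen decoder's MediaCodecInfo; the vendor-specific keys used in the Moonlight helper linked above vary per chipset and aren't reproduced here):

MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1920, 1080);
if (android.os.Build.VERSION.SDK_INT >= 30) {
    MediaCodecInfo.CodecCapabilities caps = decoderInfo.getCapabilitiesForType("video/avc");
    // Only set the flag if the decoder advertises the low-latency feature.
    if (caps.isFeatureSupported(MediaCodecInfo.CodecCapabilities.FEATURE_LowLatency)) {
        format.setInteger(MediaFormat.KEY_LOW_LATENCY, 1);
    }
}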
I'm using Oboe 1.2 in an Android audio application. When I call getFramesPerBurst(), which gives the endpoint buffer size, I get the expected result (240 frames) when the number of output channels is set to 2. However, when I set 4 output channels, the value returned by getFramesPerBurst() is around 960 (!). Is that normal? Is it a limitation of the hardware (I tested on 4 different devices with different OS versions)? A limitation of Oboe? I also notice that this value differs from the value given by the PROPERTY_OUTPUT_FRAMES_PER_BUFFER property of AudioManager.
oboe::AudioStreamBuilder builder;
if (!oboe::AudioStreamBuilder::isAAudioRecommended()) {
    builder.setAudioApi(oboe::AudioApi::OpenSLES);
}
builder.setSharingMode(oboe::SharingMode::Exclusive);
builder.setFormat(oboe::AudioFormat::Float);
builder.setChannelCount(4);
builder.setCallback(&_oboeCallback);
builder.setPerformanceMode(oboe::PerformanceMode::LowLatency);
oboe::Result result = builder.openStream(&_stream);
if (result == oboe::Result::OK) {
    int framePerBurst = _stream->getFramesPerBurst(); // ~960 for 4 channels, 240 for 2 channels
    _stream->setBufferSizeInFrames(2 * framePerBurst);
}
Unless you are connecting to an audio device which actually has 4 independent channels (e.g. a USB audio interface or DJ controller like this one), your 4-channel stream will need to be mixed into an N-channel stream, where N is the number of channels in your audio device. This could be 2 (stereo) for headphones or 1 (mono) for a built-in speaker.
The mixer introduces latency and larger buffer sizes. This is the difference in buffer sizes you see when you request a channel count of 2 vs 4.
For the lowest latency always leave the channel count unspecified when creating the stream, then do any channel count conversion inside your own app. There's an example of this here.
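For reference, the AudioManager property mentioned in the question can be read like this (standard Android API, in Java; context is assumed to be an available Context):

// PROPERTY_OUTPUT_FRAMES_PER_BUFFER reports the native burst size of the
// primary output as a string; it need not match a stream opened with a
// non-native channel count, for the mixer-related reasons described above.
AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
String value = am.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);
int nativeBurstFrames = (value != null) ? Integer.parseInt(value) : 256; // 256: fallback guess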
How can I get an MP4 with PCM audio using FFmpegFrameRecorder? I see it supports PCM formats. I tried it like below:
mFrameRecorder = new FFmpegFrameRecorder(mVideo, videoWidth, videoHeight, 1);
mFrameRecorder.setFormat("mp4");
mFrameRecorder.setSampleRate(sampleAudioRateInHz);
mFrameRecorder.setFrameRate(frameRate);
// Note: AV_CODEC_ID_MPEG4 is MPEG-4 Part 2, not H.264 (that would be AV_CODEC_ID_H264)
mFrameRecorder.setVideoCodec(avcodec.AV_CODEC_ID_MPEG4);
mFrameRecorder.setAudioCodec(avcodec.AV_CODEC_ID_PCM_S16LE);
// See: https://trac.ffmpeg.org/wiki/Encode/H.264#crf
/*
* The range of the quantizer scale is 0-51: where 0 is lossless, 23 is default, and 51 is worst possible. A lower value is a higher quality and a subjectively sane range is 18-28. Consider 18 to be visually lossless or nearly so: it should look the same or nearly the same as the input but it isn't technically lossless.
* The range is exponential, so increasing the CRF value +6 is roughly half the bitrate while -6 is roughly twice the bitrate. General usage is to choose the highest CRF value that still provides an acceptable quality. If the output looks good, then try a higher value and if it looks bad then choose a lower value.
*/
mFrameRecorder.setVideoOption("crf", "28");
mFrameRecorder.setVideoOption("preset", "superfast");
mFrameRecorder.setVideoOption("tune", "zerolatency");
but it crashed with this error message:
library "/system/lib/libdl.so" ("/system/lib/libdl.so") needed or dlopened by "/system/lib/libnativeloader.so" is not accessible for the namespace: [name="classloader-namespace", ld_library_paths="", default_library_paths="/data/app/com.github.crazyorr.ffmpegrecorder-2/lib/arm:/data/app/com.github.crazyorr.ffmpegrecorder-2/base.apk!/lib/armeabi", permitted_paths="/data:/mnt/expand:/data/data/com.github.crazyorr.ffmpegrecorder"]
I intend to encode raw YUV data to H264, for which I'm using Android's MediaCodec interface. Below is the snippet I have in place:
MediaCodec mEncoder = MediaCodec.createEncoderByType("video/avc");
MediaFormat mVideoFormat = MediaFormat.createVideoFormat("video/avc", 640 , 480);
mVideoFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
mVideoFormat.setInteger(MediaFormat.KEY_BIT_RATE, 64000);
mVideoFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 24);
mVideoFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
mVideoFormat.setInteger(MediaFormat.KEY_PROFILE, MediaCodecInfo.CodecProfileLevel.AVCProfileBaseline); // KEY_PROFILE (API 21+); KEY_AAC_PROFILE, used originally, is an audio key
mEncoder.configure(mVideoFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mEncoder.start();
ByteBuffer[] mInputVideoBuffers = mEncoder.getInputBuffers();
ByteBuffer[] mOutputVideoBuffers = mEncoder.getOutputBuffers();
Although it works well on ARM devices, it fails on the Intel x86 device I have (a Samsung Tab 3) with the message below:
E/ACodec(21756): [OMX.Intel.VideoEncoder.AVC] ERROR(0x80001001)
E/MediaCodec(21756): Codec reported an error. (omx error 0x80001001, internalError -2147483648)
Any help on this would be useful.
Found the fix for the issue: I did not release the codec before creating another one. Multiple encoder instances are not permitted on the Samsung Tab 3 with its Intel x86 chipset. This behaviour is pretty inconsistent across Android devices, judging by the other devices I've tested my code on.
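In other words, tear the old codec down before creating a new one. A minimal sketch using the standard MediaCodec lifecycle calls:

// Release the previous encoder before creating a new one; some devices
// (like this Intel-based Tab 3) refuse to create a second live instance.
if (mEncoder != null) {
    mEncoder.stop();    // only valid if the codec was started
    mEncoder.release();
    mEncoder = null;
}
mEncoder = MediaCodec.createEncoderByType("video/avc");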
The code shown won't work on some ARM devices either. COLOR_FormatYUV420SemiPlanar isn't supported everywhere.
You need to detect the set of available color formats at runtime. See the isRecognizedFormat() method in EncodeDecodeTest. To pass CTS, the device must support at least one of those formats. There are five listed, but really there are only two (planar and semi-planar), and they're not radically different.
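A sketch of that runtime detection, along the lines of the CTS helper (the five recognized formats mirror EncodeDecodeTest; everything here is standard MediaCodecInfo API):

// Return the first color format this encoder supports that we know how to fill.
static int selectColorFormat(MediaCodecInfo codecInfo, String mimeType) {
    MediaCodecInfo.CodecCapabilities caps = codecInfo.getCapabilitiesForType(mimeType);
    int[] recognized = {
            MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420PackedPlanar,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420PackedSemiPlanar,
            MediaCodecInfo.CodecCapabilities.COLOR_TI_FormatYUV420PackedSemiPlanar,
    };
    for (int supported : caps.colorFormats) {
        for (int wanted : recognized) {
            if (supported == wanted) {
                return supported;
            }
        }
    }
    return 0; // no recognized format; the caller must handle this
}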
On Intel devices, Encoder.getOutputFormat() was crashing, so I created a media format myself and supplied it directly to the muxer:
MediaFormat mVideoFormat = MediaFormat.createVideoFormat("video/avc", 640 , 480);
mVideoFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
mVideoFormat.setInteger(MediaFormat.KEY_BIT_RATE, 64000);
mVideoFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 24);
mVideoFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
mTrackIndex = mMuxer.addTrack(mVideoFormat);