Non-streamable video file created with MediaMuxer - android

I am using MediaCodec to encode video. Frames come from the camera preview callback into the MediaCodec instance (no Surface is used). I am using the JCodec library for muxing, and the produced video is streamable (the video player shows the correct duration and I can change the position with the seek bar).
Today I tried MediaMuxer instead of JCodec. The resulting video still looks fine, but its duration is completely wrong (a few hours instead of one minute) and the seek bar does not work at all.
mediaMuxer = new MediaMuxer("/path/to/video.mp4", MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
The following code is lazily called when I receive MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:
videoTrackIndex = mediaMuxer.addTrack(encoder.getMediaFormat());
mediaMuxer.start();
I write the encoded frames to the muxer with the following code:
mediaMuxer.writeSampleData(videoTrackIndex, byteBuffer, bufferInfo);
byteBuffer and bufferInfo are coming directly from MediaCodec after some positioning stuff:
byteBuffer.position(bufferInfo.offset);
byteBuffer.limit(bufferInfo.offset + bufferInfo.size);
Presentation time is set correctly:
mMediaCodec.queueInputBuffer(inputBufferIndex, 0, (int) (getWidth() * getHeight() * 1.5), System.nanoTime() / 1000, 0);
And at the end of the record I do:
mediaMuxer.stop();
mediaMuxer.release();
Logs:
I/MPEG4Writer﹕ setStartTimestampUs: 0
I/MPEG4Writer﹕ Earliest track starting time: 0
D/MPEG4Writer﹕ Stopping Video track
I/MPEG4Writer﹕ Received total/0-length (770/0) buffers and encoded 770 frames. - video
D/MPEG4Writer﹕ Stopping Video track source
D/MPEG4Writer﹕ Video track stopped
D/MPEG4Writer﹕ Stopping writer thread
D/MPEG4Writer﹕ 0 chunks are written in the last batch
D/MPEG4Writer﹕ Writer thread stopped
I/MPEG4Writer﹕ The mp4 file will not be streamable.
D/MPEG4Writer﹕ Stopping Video track
I guess the "The mp4 file will not be streamable." message points at the problem.
Update:
I've tested my app on another device (LG G2), which produces more verbose logs. The same kind of file is produced, with a huge duration. Logs are here and the video file is here.

Thanks to fadden I was able to figure out the problem. I was actually sending my first frame with presentationTimeUs = 0. This happened because I was not handling buffers flagged with MediaCodec.BUFFER_FLAG_CODEC_CONFIG properly: I was feeding them to the muxer, but what I should have done is skip them with the following code (as in the reference example):
if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
    mBufferInfo.size = 0;
}
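For completeness, a minimal sketch of the full drain loop with that check in place. Names like mEncoder, mMuxer, mVideoTrackIndex and TIMEOUT_US are placeholders, not the exact code from my project:
// Drain the encoder and forward frames to the muxer, skipping the codec-config buffer
ByteBuffer[] outputBuffers = mEncoder.getOutputBuffers();
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int index = mEncoder.dequeueOutputBuffer(info, TIMEOUT_US);
while (index >= 0) {
    ByteBuffer data = outputBuffers[index];
    if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
        // SPS/PPS already went into the track format via INFO_OUTPUT_FORMAT_CHANGED;
        // this buffer has presentationTimeUs = 0 and must not reach the muxer
        info.size = 0;
    }
    if (info.size != 0) {
        data.position(info.offset);
        data.limit(info.offset + info.size);
        mMuxer.writeSampleData(mVideoTrackIndex, data, info);
    }
    mEncoder.releaseOutputBuffer(index, false);
    index = mEncoder.dequeueOutputBuffer(info, TIMEOUT_US);
}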

Related

Android oboe c++ Some sounds distorted on playback

I'm using the Android oboe library for high performance audio in a music game.
In the assets folder I have 2 .raw files (both 48000Hz 16 bit PCM wavs and about 60kB)
std_kit_sn.raw
std_kit_ht.raw
These are loaded into memory as SoundRecordings and added to a Mixer. kSampleRateHz is 48000:
stdSN= SoundRecording::loadFromAssets(mAssetManager, "std_kit_sn.raw");
stdHT= SoundRecording::loadFromAssets(mAssetManager, "std_kit_ht.raw");
mMixer.addTrack(stdSN);
mMixer.addTrack(stdFT);
// Create a builder
AudioStreamBuilder builder;
builder.setFormat(AudioFormat::I16);
builder.setChannelCount(1);
builder.setSampleRate(kSampleRateHz);
builder.setCallback(this);
builder.setPerformanceMode(PerformanceMode::LowLatency);
builder.setSharingMode(SharingMode::Exclusive);
LOGD("After creating a builder");
// Open stream
Result result = builder.openStream(&mAudioStream);
if (result != Result::OK){
    LOGE("Failed to open stream. Error: %s", convertToText(result));
}
LOGD("After openstream");
// Reduce stream latency by setting the buffer size to a multiple of the burst size
mAudioStream->setBufferSizeInFrames(mAudioStream->getFramesPerBurst() * 2);
// Start the stream
result = mAudioStream->requestStart();
if (result != Result::OK){
    LOGE("Failed to start stream. Error: %s", convertToText(result));
}
LOGD("After starting stream");
They are called appropriately to play with standard code (as per Google tutorials) at required times:
stdSN->setPlaying(true);
stdHT->setPlaying(true); //Nasty Sound
The audio callback is standard (as per Google tutorials):
DataCallbackResult SoundFunctions::onAudioReady(AudioStream *mAudioStream, void *audioData, int32_t numFrames) {
    // Play the stream
    mMixer.renderAudio(static_cast<int16_t*>(audioData), numFrames);
    return DataCallbackResult::Continue;
}
The std_kit_sn.raw plays fine, but std_kit_ht.raw has a nasty distortion. Both play with low latency. Why does one play fine while the other is badly distorted?
I loaded your sample project and I believe the distortion you hear is caused by clipping/wraparound during mixing of sounds.
The Mixer object from the sample is a summing mixer. It just adds the values of each track together and outputs the sum.
You need to add some code to reduce the volume of each track to avoid exceeding the limits of an int16_t (although you're welcome to file a bug on the oboe project and I'll try to add this in an upcoming version). If you exceed this limit you'll get wraparound which is causing the distortion.
Additionally, your app is hardcoded to run at 22050 frames/sec. This will result in sub-optimal latency across most mobile devices because the stream is forced to upsample to the audio device's native frame rate. A better approach would be to leave the sample rate undefined when opening the stream - this will give you the optimal frame rate for the current audio device - then use a resampler on your source files to supply audio at this frame rate.
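To illustrate the wraparound: a 16-bit sample can only hold values from -32768 to 32767, so summing two full-scale tracks overflows. A minimal sketch of mixing with per-track gain, shown in Java for brevity (the project's mixer is C++/oboe, but the idea is the same; names are illustrative):
// Mix two 16-bit PCM tracks with per-track gain so the sum stays within int16 range
static void mix(short[] out, short[] trackA, short[] trackB, float gain) {
    for (int i = 0; i < out.length; i++) {
        float sum = trackA[i] * gain + trackB[i] * gain; // e.g. gain = 0.5f
        // Clamp as a safety net; without it the cast wraps around and distorts
        if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE;
        if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
        out[i] = (short) sum;
    }
}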

Streaming video of android surface

I have a problem with RTMP streaming of an Android surface to a client application. My solution has very high latency because my surface does not produce frames 60 times a second; it can produce a frame at any time (once in 30 seconds, for example). I want to show each newly produced frame to the client immediately.
Android pushes every frame and that part looks fine. The client app (jwplayer or VLC) receives it, but it waits for something: it starts showing video only after receiving a number of frames. But I need to see every incoming frame on the client side as soon as it arrives.
How it is working now:
I have a Surface object, obtained from MediaCodec class. MediaCodec is set for h264 video encoding.
MediaCodec mEncoder;
.....
MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT, colorFormat);
format.setInteger(MediaFormat.KEY_BIT_RATE, videoBitrate);
format.setInteger(MediaFormat.KEY_FRAME_RATE, videoFramePerSecond);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, iframeInterval);
try {
    mEncoder = MediaCodec.createEncoderByType("video/avc");
} catch (IOException e) {
    e.printStackTrace();
}
mEncoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mSurface = mEncoder.createInputSurface();
if (mSurfaceCallback != null)
    mSurfaceCallback.onSurfaceCreated(mSurface);
mEncoder.start();
Sometimes Android draws to the surface. I can't control the rate of these drawings, and I can't draw anything to that surface myself. When something changes on the surface, MediaCodec produces a new byteBuffer containing an H.264 frame, and I send that frame over RTMP.
On the client side I have an HTML page with jwplayer:
<pre id="myElement"></pre>
<script>
var playerInstance = jwplayer("myElement");
playerInstance.setup({
file:"rtmp://127.0.0.1:1935/live/stream",
height: 800,
width: 480,
autostart: true,
controls: false,
rtmp: {
bufferlength: 0.1
}
});
</script>
I've tried changing iframeInterval, the encoding fps, and bufferlength. Nothing really helps.
Is there any way to show incoming frames immediately?
What do you mean? If I understood right, you have:
vlc (client) ---- rtmp protocol ---- android (producer)
You encode video from something (maybe the camera) using MediaCodec, and in VLC there is latency, right?
First, what are you using - the direct input buffers or MediaCodec.Callback()? With the callback you can check every frame in onOutputBufferAvailable and measure the time from one frame to the next - this will tell you whether the problem is on the Android side.
Then you can work on the frame transfer: use Wireshark to check the frame sending timing - maybe this is a network problem.
Also, VLC and other players try to fill an internal buffer and only then start showing video. Try turning off the VLC buffer (https://forum.videolan.org/viewtopic.php?t=40408). It is also common for VLC to wait for an IDR frame. You can set the interval for sending IDR frames in code:
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, iframeInterval);
iframeInterval is in seconds (try setting it to 1 second); this will increase the stream size.
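For reference, a minimal sketch of those two ideas together: a 1-second I-frame interval plus, on API 19+, explicitly requesting a sync frame (for example right after a new player connects). format and mEncoder are the objects from the question:
// Ask the encoder for an IDR frame every second
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

// On API 19+, request an immediate sync frame on demand
Bundle params = new Bundle();
params.putInt(MediaCodec.PARAMETER_KEY_REQUEST_SYNC_FRAME, 0);
mEncoder.setParameters(params);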
You can hopefully generate video frames at a constant rate, even above 20 fps, to produce smooth video with acceptable latency. The H.264 encoder handles a static picture (one that changes only once every ~30 seconds) gracefully, and when nothing changes the frame size will be minimal.

Android MediaCodec appears to buffer H264 frames

I'm manually reading an RTP/H.264 stream and passing the H.264 frames to the Android MediaCodec. I use the marker bit as the frame boundary. The MediaCodec is tied to an OpenGL texture (SurfaceTexture).
In general everything works fine, but the decoder appears to buffer frames. If I put a frame into the decoder, it is not rendered to the texture immediately; only after I put 2-3 more frames into the decoder is the first frame rendered to the texture.
I'm implementing against Android 4.4.4.
private static final int INFINITE_TIMEOUT = -1;
private static final int TIMEOUT_OUTPUT_BUFFER_MEDIA_CODEC = 1000;
...
int bufferIndex = codec.dequeueInputBuffer(INFINITE_TIMEOUT);
if (bufferIndex < 0) {
    throw new RuntimeException("Error");
}
ByteBuffer inputBuffer = inputBuffers[bufferIndex];
inputBuffer.clear();
// Copy H264 data to inputBuffer
h264Frame.fill(inputBuffer);
codec.queueInputBuffer(bufferIndex, 0, inputBuffer.position(), 0, 0);
drainOutputBuffers();
...
and
private boolean drainOutputBuffers() {
    MediaCodec.BufferInfo buffInfo = new MediaCodec.BufferInfo();
    int outputBufferIndex = codec.dequeueOutputBuffer(buffInfo, TIMEOUT_OUTPUT_BUFFER_MEDIA_CODEC);
    if (outputBufferIndex >= 0) {
        codec.releaseOutputBuffer(outputBufferIndex, true);
        return true;
    }
    switch (outputBufferIndex) {
        case MediaCodec.INFO_TRY_AGAIN_LATER:
            LOG.debug("Could not dequeue output buffer. Try again later");
            break;
        case MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:
            LOG.warn("The output format has changed.");
            break;
        case MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED:
            LOG.warn("The output buffers has changed.");
            break;
        default:
            LOG.warn("The output buffer index was negative: {}", outputBufferIndex);
    }
    return false;
}
On the rendering side I use the "onFrameAvailable" callback for checking if I have to update the texture on the openGl Thread. The flag I use for checking is guarded by a lock (synchronized).
I suspect that the presentation timestamp may influence the rendering. But I set it to 0. Thus I assume the frame should be rendered without a delay.
I'd like to have the frame rendered to the texture without having to put additional frames.
From the MediaCodec documentation:
The Executing state has three sub-states: Flushed, Running and End-of-Stream. Immediately after start() the codec is in the Flushed sub-state, where it holds all the buffers. As soon as the first input buffer is dequeued, the codec moves to the Running sub-state, where it spends most of its life. When you queue an input buffer with the end-of-stream marker, the codec transitions to the End-of-Stream sub-state. In this state the codec no longer accepts further input buffers, but still generates output buffers until the end-of-stream is reached on the output. You can move back to the Flushed sub-state at any time while in the Executing state using flush().
You need to "queue an input buffer with the end-of-stream marker". Do this with the first frame you feed to the decoder (make sure it is a keyframe).
This point is to tell the decoder not to expect anymore frames and therefore begin playback immediately. Otherwise it's normal to feed 3 or 4 frames before seeing anything. This an expectation of all MPEG decoders and is not Android-related.
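A minimal sketch of that, using the variables from the code above (to feed further frames afterwards, flush() moves the codec back to the Flushed sub-state, as the quoted documentation describes):
// Queue the first (key)frame with the end-of-stream flag so the decoder
// emits it immediately instead of waiting for more input
codec.queueInputBuffer(bufferIndex, 0, inputBuffer.position(), 0,
        MediaCodec.BUFFER_FLAG_END_OF_STREAM);
drainOutputBuffers();
// Return to the Flushed sub-state before queueing more frames
codec.flush();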
If anyone is looking at this in 2021:
MediaCodec may or may not hold frames (I-frames, I mean); it totally depends on the device. If you want to be sure a frame is released, for example when you plan to decode a single frame, you can call
mediaCodec.stop()
every time you put data into the input buffer so that it releases the frame. Afterwards, you have to start the MediaCodec again for the next frame.
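A rough sketch of that per-frame approach (hypothetical variable names; note that after stop() the codec must be configured and started again before the next frame):
// Decode a single frame, then stop and restart the codec so the frame is released
codec.queueInputBuffer(inIndex, 0, size, 0, 0);
int outIndex = codec.dequeueOutputBuffer(info, TIMEOUT_US);
if (outIndex >= 0) {
    codec.releaseOutputBuffer(outIndex, true); // render to the SurfaceTexture
}
codec.stop();
codec.configure(format, surface, null, 0); // reconfigure before the next frame
codec.start();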
The MediaCodec decoder buffers 6-7 frames before outputting the first decoded frame. This seems to be a flaw in MediaCodec and is a problem for streaming applications.
So far my debugging shows that decoding H.264 with MediaCodec has a 6-7 frame delay at the start of the stream.

How to handle the PTS correctly using Android AudioRecord and MediaCodec as audio encoder?

I'm using AudioRecord to record the audio stream during a camera capturing process on Android device.
Since I want to process the frame data and handle audio/video samples, I do not use MediaRecorder.
I run AudioRecord in another thread, calling read() to gather the raw audio data.
Once I get a data stream, I feed it into a MediaCodec configured as an AAC audio encoder.
Here are some of my codes about the audio recorder / encoder:
m_encode_audio_mime = "audio/mp4a-latm";
m_audio_sample_rate = 44100;
m_audio_channels = AudioFormat.CHANNEL_IN_MONO;
m_audio_channel_count = (m_audio_channels == AudioFormat.CHANNEL_IN_MONO ? 1 : 2);
int audio_bit_rate = 64000;
int audio_data_format = AudioFormat.ENCODING_PCM_16BIT;
m_audio_buffer_size = AudioRecord.getMinBufferSize(m_audio_sample_rate, m_audio_channels, audio_data_format) * 2;
m_audio_recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, m_audio_sample_rate,
m_audio_channels, audio_data_format, m_audio_buffer_size);
m_audio_encoder = MediaCodec.createEncoderByType(m_encode_audio_mime);
MediaFormat audio_format = new MediaFormat();
audio_format.setString(MediaFormat.KEY_MIME, m_encode_audio_mime);
audio_format.setInteger(MediaFormat.KEY_BIT_RATE, audio_bit_rate);
audio_format.setInteger(MediaFormat.KEY_CHANNEL_COUNT, m_audio_channel_count);
audio_format.setInteger(MediaFormat.KEY_SAMPLE_RATE, m_audio_sample_rate);
audio_format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
audio_format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, m_audio_buffer_size);
m_audio_encoder.configure(audio_format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
I found that the first call of AudioRecord.read() takes a longer time to return, while successive read() calls have time intervals closer to the real duration of the audio data.
For example, my audio format is 44100Hz 16 bit 1 channel and the buffer size of AudioRecord is 16384 bytes, so a full buffer corresponds to 185.76 ms. When I record the system time before each call of read() and subtract the base time, I get the following sequence:
time before each read(): 0ms, 345ms, 543ms, 692ms, 891ms, 1093ms, 1244ms, ...
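For reference, the 185.76 ms figure follows directly from the buffer size:
// 16384 bytes / 2 bytes per 16-bit sample / 1 channel = 8192 samples
// 8192 samples / 44100 samples per second ≈ 0.18576 s ≈ 185.76 ms
double bufferMs = (16384 / 2.0) / 44100.0 * 1000.0; // ≈ 185.76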
I feed this raw data to the audio encoder, using the read times above as PTS, and the encoder outputs encoded audio samples with the following PTS:
encoder output PTS: 0ms, 185ms, 371ms, 557ms, 743ms, 928ms, ...
It looks like the encoder treats each chunk of data as having the same duration. I believe the encoder works correctly, since I give it raw data of the same size (16384 bytes) every time. However, if I use the encoder output PTS as the input to the muxer, I get a video whose audio runs faster than the video content.
I want to ask:
Is it expected that the first call of AudioRecord.read() blocks longer? I'm sure the call takes more than 300 ms, while it only records 16384 bytes, which is about 186 ms of audio. Is this an issue that depends on the device / Android version?
What should I do to achieve audio/video synchronization? I have a workaround: measure the delay of the first call of read(), then shift the PTS of the audio samples by that delay. Is there a better way to handle this?
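For what it's worth, a minimal sketch of one way to implement that: derive each buffer's PTS from the amount of PCM data read so far rather than from the wall-clock time of each read() call (variable names are illustrative; m_audio_sample_rate is the 44100 Hz value above):
// PTS derived from the accumulated amount of 16-bit mono PCM data
long totalBytesRead = 0;

long nextPtsUs(int bytesJustRead) {
    long ptsUs = totalBytesRead * 1_000_000L / (2L * m_audio_sample_rate);
    totalBytesRead += bytesJustRead;
    // add a one-time base offset here if the audio and video clocks need aligning
    return ptsUs; // pass this as presentationTimeUs when queueing the encoder input
}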
Convert the mono input to stereo. I was pulling my hair out for some time before I realised the AAC encoder exposed by MediaCodec only works with stereo input.

What is the best way to achieve Audio Video Synchronization in Android Based Media Player Application using MediaCodec API?

I'm trying to implement a Media Player in android using the MediaCodec API.
I've created three threads
Thread 1 : To de-queue the input buffers to get free indices and then queuing the audio and video frames in respective codec's input buffer
Thread 2 : To de-queue the audio codec's output buffer and render it using AudioTrack class' write method
Thread 3 : To de-queue the video codec's output buffer and render it using releaseBuffer method
I'm facing a lot of problems achieving synchronization between audio and video frames. I never drop audio frames, and before rendering a video frame I check whether the decoded frame is late by more than 30 ms; if it is, I drop it, and if it is more than 10 ms early I don't render it yet.
To find the difference between audio and video I use following logic
public long calculateLateByUs(long timeUs) {
    long nowUs = 0;
    if (hasAudio && audioTrack != null) {
        synchronized (audioTrack) {
            if (first_audio_sample && startTimeUs >= 0) {
                System.out.println("First video after audio Time Us: " + timeUs);
                startTimeUs = -1;
                first_audio_sample = false;
            }
            nowUs = (audioTrack.getPlaybackHeadPosition() * 1000000L) /
                    audioCodec.format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
        }
    } else if (!hasAudio) {
        nowUs = System.currentTimeMillis() * 1000;
        startTimeUs = 0;
    } else {
        nowUs = System.currentTimeMillis() * 1000;
    }
    if (startTimeUs == -1) {
        startTimeUs = nowUs - timeUs;
    }
    if (syslog) {
        System.out.println("Timing Statistics:");
        System.out.println("Key Sample Rate :" + audioCodec.format.getInteger(MediaFormat.KEY_SAMPLE_RATE) + " nowUs: " + nowUs + " startTimeUs: " + startTimeUs + " timeUs: " + timeUs + " return value :" + (nowUs - (startTimeUs + timeUs)));
    }
    return (nowUs - (startTimeUs + timeUs));
}
timeUs is the presentation time in microseconds of the video frame. nowUs is supposed to contain the duration in microseconds for which audio has been playing. startTimeUs is the initial difference between audio and video frames, which always has to be maintained.
The first if block checks whether there is indeed an audio track and whether it has been initialized, and sets the value of nowUs by calculating it from the AudioTrack.
If there is no audio (the else-if branch), nowUs is set to the system time and the initial gap is set to zero. startTimeUs is initialized to zero in the main function.
The if block inside the synchronized block is used in case the first frame to be rendered is audio and the audio frame joins later. The first_audio_sample flag is initially set to true.
Please let me know if anything is not clear.
Also, if you know of any open-source project where a media player for an A/V file has been implemented using MediaCodec, that would be great.
If you are working on one of the latest releases of Android, you can consider retrieving an AudioTimestamp from AudioTrack directly. Please refer to this documentation for more details. Similarly, you could also consider retrieving the sampling rate via getSampleRate.
If you wish to continue with your algorithm, you could consider a relatively similar implementation in this native example. SimplePlayer implements a player engine by employing MediaCodec and has an a-v sync section too. Please refer to this section of code where the synchronization is performed. I feel this should help as a good reference.
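A minimal sketch of the AudioTimestamp-based query mentioned above (API 19+), falling back to getPlaybackHeadPosition() while no timestamp is available yet:
// Estimate how much audio (in microseconds) has actually been presented
AudioTimestamp ts = new AudioTimestamp();
long nowUs;
if (audioTrack.getTimestamp(ts)) {
    // framePosition was presented at ts.nanoTime; extrapolate to "now"
    long elapsedNs = System.nanoTime() - ts.nanoTime;
    nowUs = (ts.framePosition * 1_000_000L) / audioTrack.getSampleRate()
            + elapsedNs / 1000;
} else {
    nowUs = (audioTrack.getPlaybackHeadPosition() * 1_000_000L)
            / audioTrack.getSampleRate();
}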
