Replacing audio track of .mp4 file - android

I want to replace the audio of an .mp4 video file with another .mp3 audio file. If replacing the audio track of the original video is not possible, please tell me how to keep both audio tracks and let the user select the desired audio track during playback.
I tried using MediaMuxer and MediaExtractor but still couldn't work out the correct solution. Can anyone please help me?
This is the MediaMuxer sample program from https://developer.android.com/reference/android/media/MediaMuxer.html:
MediaMuxer muxer = new MediaMuxer("temp.mp4", OutputFormat.MUXER_OUTPUT_MPEG_4);
// More often, the MediaFormat will be retrieved from MediaCodec.getOutputFormat()
// or MediaExtractor.getTrackFormat().
MediaFormat audioFormat = new MediaFormat(...);
MediaFormat videoFormat = new MediaFormat(...);
int audioTrackIndex = muxer.addTrack(audioFormat);
int videoTrackIndex = muxer.addTrack(videoFormat);
ByteBuffer inputBuffer = ByteBuffer.allocate(bufferSize);
boolean finished = false;
BufferInfo bufferInfo = new BufferInfo();
muxer.start();
while (!finished) {
    // getInputBuffer() will fill the inputBuffer with one frame of encoded
    // sample from either MediaCodec or MediaExtractor, set isAudioSample to
    // true when the sample is audio data, set up all the fields of bufferInfo,
    // and return true if there are no more samples.
    finished = getInputBuffer(inputBuffer, isAudioSample, bufferInfo);
    if (!finished) {
        int currentTrackIndex = isAudioSample ? audioTrackIndex : videoTrackIndex;
        muxer.writeSampleData(currentTrackIndex, inputBuffer, bufferInfo);
    }
}
muxer.stop();
muxer.release();
I am using Android API 23, and I am getting errors saying that getInputBuffer and isAudioSample cannot be resolved.
MediaFormat audioFormat=new MediaFormat(...);
What should I write inside the parentheses? Where should I specify my video and audio files? I have searched a lot; please give me a solution to this problem.

You can't pass anything within those parentheses. You have to use MediaFormat's static factory methods:
MediaFormat audioFormat = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 160000, 1);
MediaFormat videoFormat = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_MPEG4, 1280, 720);
The values that I added here are only examples. You have to specify:
For the audio: the MIME type, the sample rate, and the number of channels of the resulting audio.
For the video: the MIME type, the width, and the height of the resulting video.
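In practice you usually don't build these formats by hand for a remux job. Below is a minimal sketch of pulling the track formats straight from MediaExtractor and registering them with the muxer; the file paths are placeholders, and note that MediaMuxer's MP4 output expects compressed tracks such as AAC, so an .mp3 source will typically need to be decoded and re-encoded to AAC first:

MediaExtractor videoExtractor = new MediaExtractor();
videoExtractor.setDataSource("/sdcard/input_video.mp4");  // hypothetical path
MediaExtractor audioExtractor = new MediaExtractor();
audioExtractor.setDataSource("/sdcard/new_audio.m4a");    // hypothetical path (AAC audio)

MediaMuxer muxer = new MediaMuxer("/sdcard/output.mp4",
        MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

int videoTrackIndex = -1;
int audioTrackIndex = -1;

// Register the video track of the original file with the muxer.
for (int i = 0; i < videoExtractor.getTrackCount(); i++) {
    MediaFormat format = videoExtractor.getTrackFormat(i);
    String mime = format.getString(MediaFormat.KEY_MIME);
    if (mime != null && mime.startsWith("video/")) {
        videoExtractor.selectTrack(i);
        videoTrackIndex = muxer.addTrack(format);
        break;
    }
}

// Register the replacement audio track with the muxer.
for (int i = 0; i < audioExtractor.getTrackCount(); i++) {
    MediaFormat format = audioExtractor.getTrackFormat(i);
    String mime = format.getString(MediaFormat.KEY_MIME);
    if (mime != null && mime.startsWith("audio/")) {
        audioExtractor.selectTrack(i);
        audioTrackIndex = muxer.addTrack(format);
        break;
    }
}

muxer.start();
// ...then copy samples from each extractor into its muxer track with
// writeSampleData(), using the extractor's sample time and flags, and finally
// call muxer.stop() and muxer.release().

Whether MediaMuxer will accept a second audio track depends on the API level, so the "keep both audio tracks" option may need a different muxing tool.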

Related

Adding metadata to a m4a/mp4 AAC container

I am encoding my recorded PCM audio data into AAC inside an m4a (mp4) container. It works fine, and now I want to add some metadata.
The ever-quoted documentation for this is this one. Yet I do not understand that code: what is the condition for the while loop? What is KEY_MIME? How do I set bufferSize? How do I get the currentVideoTrackTimeUs? Not surprisingly, exiftool is not showing what I am trying to add (a GPS location), although it does detect that something more is embedded.
I have been trying, without luck, to Google a working example. Any clue?
This is what I tried so far:
...
outputFormat = codec.getOutputFormat();
audioTrackIdx = mux.addTrack(outputFormat);
MediaFormat metadataFormat = new MediaFormat();
metadataFormat.setString(MediaFormat.KEY_MIME, "application/gps");
int metadataTrackIndex = mux.addTrack(metadataFormat);
mux.start();
setM4Ametadata(mux,metadataTrackIndex,outBuffInfo);
...
private void setM4Ametadata(MediaMuxer muxer, int metadataTrackIndex, MediaCodec.BufferInfo info) {
ByteBuffer metaData = ByteBuffer.allocate(100);
metaData.putFloat(-22.9585325f);
metaData.putFloat(-43.2161615f);
MediaCodec.BufferInfo metaInfo = new MediaCodec.BufferInfo();
metaInfo.presentationTimeUs = info.presentationTimeUs;
metaInfo.offset = info.offset;
metaInfo.flags = info.flags;
metaInfo.size = 100;
muxer.writeSampleData(metadataTrackIndex, metaData, metaInfo);
muxer.writeSampleData(metadataTrackIndex, metaData, metaInfo);
}
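For what it's worth, in the official MediaMuxer per-frame metadata example the open points above resolve roughly as follows: the while loop is simply the normal muxing loop over encoded video samples, KEY_MIME for a metadata track has to be an "application/..." type, bufferSize is just the size of the metadata payload (eight bytes for two floats), and currentVideoTrackTimeUs is the presentationTimeUs of the video sample being written at that moment. A hedged sketch, assuming videoBufferInfo is the BufferInfo of the video sample that was just muxed and that the device/API level supports timed metadata tracks:

// Write one GPS metadata sample per muxed video sample, reusing the video
// sample's presentation time so the two stay associated.
ByteBuffer metaData = ByteBuffer.allocate(8);         // two floats = 8 bytes
metaData.putFloat(-22.9585325f);                      // latitude
metaData.putFloat(-43.2161615f);                      // longitude
metaData.flip();                                      // rewind before handing off

MediaCodec.BufferInfo metaInfo = new MediaCodec.BufferInfo();
metaInfo.presentationTimeUs = videoBufferInfo.presentationTimeUs; // time of the video sample
metaInfo.offset = 0;
metaInfo.flags = 0;
metaInfo.size = 8;                                    // must match the payload size
mux.writeSampleData(metadataTrackIndex, metaData, metaInfo);

Standard tools such as exiftool generally won't decode a custom "application/gps" timed-metadata track; you would need to read it back yourself with MediaExtractor.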

getInputBuffer in Android MediaCodec and MediaMuxer

Below is a fragment of Android MediaMuxer API sample code:
https://developer.android.com/reference/android/media/MediaMuxer.html
MediaMuxer muxer = new MediaMuxer("temp.mp4", OutputFormat.MUXER_OUTPUT_MPEG_4);
// More often, the MediaFormat will be retrieved from MediaCodec.getOutputFormat()
// or MediaExtractor.getTrackFormat().
MediaFormat audioFormat = new MediaFormat(...);
MediaFormat videoFormat = new MediaFormat(...);
int audioTrackIndex = muxer.addTrack(audioFormat);
int videoTrackIndex = muxer.addTrack(videoFormat);
ByteBuffer inputBuffer = ByteBuffer.allocate(bufferSize);
boolean finished = false;
BufferInfo bufferInfo = new BufferInfo();
muxer.start();
while (!finished) {
    // getInputBuffer() will fill the inputBuffer with one frame of encoded
    // sample from either MediaCodec or MediaExtractor, set isAudioSample to
    // true when the sample is audio data, set up all the fields of bufferInfo,
    // and return true if there are no more samples.
    finished = getInputBuffer(inputBuffer, isAudioSample, bufferInfo);
    if (!finished) {
        int currentTrackIndex = isAudioSample ? audioTrackIndex : videoTrackIndex;
        muxer.writeSampleData(currentTrackIndex, inputBuffer, bufferInfo);
    }
}
muxer.stop();
muxer.release();
For this line: finished = getInputBuffer(inputBuffer, isAudioSample, bufferInfo); I could not find the function getInputBuffer in either MediaCodec.java or MediaMuxer.java. Is it a user-defined function or an API function?
In this case, getInputBuffer is a hypothetical user-defined function; it is not an API function. The comment above it explains what it is supposed to do. (Note that it wouldn't actually work exactly as written, since the isAudioSample variable can't be updated by the function when passed that way.)
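For reference, here is one way such a helper could look when the samples come from a MediaExtractor. This is only a sketch of that hypothetical function (the extractor setup and the extractor-to-muxer track mapping are assumed to exist elsewhere), restructured so the caller asks the extractor which track the sample belongs to instead of relying on a boolean out-parameter:

// Sketch of a concrete replacement for the hypothetical getInputBuffer():
// reads the next encoded sample from a MediaExtractor and fills bufferInfo.
// Returns true when there are no more samples, mirroring the doc snippet.
private static boolean readNextSample(MediaExtractor extractor, ByteBuffer inputBuffer,
                                      MediaCodec.BufferInfo bufferInfo) {
    int sampleSize = extractor.readSampleData(inputBuffer, 0);
    if (sampleSize < 0) {
        return true;                                   // end of stream
    }
    bufferInfo.offset = 0;
    bufferInfo.size = sampleSize;
    bufferInfo.presentationTimeUs = extractor.getSampleTime();
    bufferInfo.flags = extractor.getSampleFlags();
    // The caller maps extractor.getSampleTrackIndex() to the matching muxer
    // track index before calling muxer.writeSampleData(), then moves on.
    extractor.advance();
    return false;
}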

Using Android MediaCodec for adaptive streaming

I have used MediaCodec to play AVC video files. Now I am trying to play video from a stream, but I couldn't find any examples or good documentation on using MediaCodec for adaptive streaming. Can anyone point me to a good example, or just post what I need to do?
Some code:
...
codec = MediaCodec.createDecoderByType(type);
format = new MediaFormat();
format.setString(MediaFormat.KEY_MIME, type);
format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, track.getInt("maxsize"));
format.setInteger(MediaFormat.KEY_WIDTH, videoWidth);
format.setInteger(MediaFormat.KEY_HEIGHT, videoHeight);
format.setInteger(MediaFormat.KEY_MAX_WIDTH, videoWidth);
format.setInteger(MediaFormat.KEY_MAX_HEIGHT, videoHeight);
...
mSurface = new Surface(mSurfaceTexture);
codec.configure(format, mSurface, null, 0);
codec.start();
...
Notice that I don't have csd-0 and csd-1 at init time; I'd like to submit them after the codec is started. How can I do that?
now when I call
int index = codec.dequeueInputBuffer(timeout * 1000);
index is always -1.
Any help would be appreciated.
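On the csd-0/csd-1 part: one common approach (sketched below under the assumption that you already have the start-code-prefixed SPS and PPS NAL units in a byte array named csd) is to feed the codec-specific data as the very first input buffer after start(), flagged with BUFFER_FLAG_CODEC_CONFIG, instead of putting it into the MediaFormat:

// Submit SPS/PPS as codec-specific data after codec.start().
int inIndex = codec.dequeueInputBuffer(10000);         // 10 ms timeout
if (inIndex >= 0) {
    ByteBuffer inBuf = codec.getInputBuffer(inIndex);  // API 21+; use getInputBuffers() on older releases
    inBuf.clear();
    inBuf.put(csd);
    codec.queueInputBuffer(inIndex, 0, csd.length, 0,
            MediaCodec.BUFFER_FLAG_CODEC_CONFIG);
}
// Regular encoded frames are then queued the same way, without the flag.

If dequeueInputBuffer keeps returning -1 (INFO_TRY_AGAIN_LATER) immediately after start(), the problem is usually elsewhere in the setup, since a freshly started decoder normally has input buffers available.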

MediaCodec decoding AAC audio on Android - decodes but can't hear any output

I can successfully decode AAC using the NDK AMediaCodec API, but no audio is playing.
Here's my configuration:
AMediaFormat_setString( format, AMEDIAFORMAT_KEY_MIME, "audio/mp4a-latm" );
AMediaFormat_setInt32( format, AMEDIAFORMAT_KEY_CHANNEL_COUNT, 2 );
AMediaFormat_setInt32( format, AMEDIAFORMAT_KEY_SAMPLE_RATE, 44100 );
AMediaFormat_setInt32( format, AMEDIAFORMAT_KEY_IS_ADTS, 0 );
uint8_t es[2] = { 0x12, 0x12 };
AMediaFormat_setBuffer( format, "csd-0", es, 2 );
And here's what I do to decode:
ssize_t inputIndex = AMediaCodec_dequeueInputBuffer( decoder, kInputTimeout );
uint8_t* inputBuf = AMediaCodec_getInputBuffer( decoder, inputIndex, &inputSize );
// Copy AAC data into inputBuf ...
AMediaCodec_queueInputBuffer( decoder, inputIndex, 0, aacSize, pts, 0 );
ssize_t outputIndex = AMediaCodec_dequeueOutputBuffer( decoder, &outputBufferInfo, kOutputTimeout );
if( outputIndex >= 0 )
{
AMediaCodec_releaseOutputBuffer( decoder, outputIndex, true );
}
I'm not getting any errors, and outputBufferInfo.presentationTimeUs is being updated appropriately, so it seems to be decoding. However, no audio is being output. Is it right that releaseOutputBuffer does this for audio? I've tried setting its render boolean parameter to both true and false, but I get silence either way.
Does MediaCodec output audio like it does video?
No, MediaCodec doesn't automatically render audio; the render parameter to releaseOutputBuffer doesn't do anything for audio. (See the Java documentation for the MediaCodec class for more explanation of these matters, which may be lacking in the NDK documentation.)
You need to manually take the decoded output buffer and feed it to either AudioTrack or OpenSL ES in order to play it back.
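For illustration, here is a minimal Java-side sketch of the AudioTrack route (the NDK equivalent would go through OpenSL ES or AAudio); the 44.1 kHz stereo 16-bit values are assumptions and should really be read from the codec's output MediaFormat:

// Play decoded PCM from a MediaCodec output buffer through AudioTrack.
int minBuf = AudioTrack.getMinBufferSize(44100,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        minBuf, AudioTrack.MODE_STREAM);
track.play();

// For each decoded output buffer produced by the codec:
ByteBuffer pcm = codec.getOutputBuffer(outputIndex);            // decoded PCM
track.write(pcm, bufferInfo.size, AudioTrack.WRITE_BLOCKING);   // API 21+ overload
codec.releaseOutputBuffer(outputIndex, false);                  // render flag is ignored for audio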

Unable to mux both audio and video

I'm writing an app that records screen capture and audio using MediaCodec. I use MediaMuxer to mux the video and audio into an mp4 file. I successfully managed to write video and audio separately; however, when I try muxing them together live, the result is unexpected: either audio is played without video, or video is played right after the audio. My guess is that I'm doing something wrong with the timestamps, but I can't figure out what exactly. I have already looked at these examples, https://github.com/OnlyInAmerica/HWEncoderExperiments/tree/audiotest/HWEncoderExperiments/src/main/java/net/openwatch/hwencoderexperiments and the ones on bigflake.com, and was not able to find the answer.
Here are my media format configurations:
mVideoFormat = createVideoFormat();

private static MediaFormat createVideoFormat() {
    MediaFormat format = MediaFormat.createVideoFormat(
            Preferences.MIME_TYPE, mScreenWidth, mScreenHeight);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    format.setInteger(MediaFormat.KEY_BIT_RATE, Preferences.BIT_RATE);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, Preferences.FRAME_RATE);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL,
            Preferences.IFRAME_INTERVAL);
    return format;
}

mAudioFormat = createAudioFormat();

private static MediaFormat createAudioFormat() {
    MediaFormat format = new MediaFormat();
    format.setString(MediaFormat.KEY_MIME, "audio/mp4a-latm");
    format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
    format.setInteger(MediaFormat.KEY_SAMPLE_RATE, 44100);
    format.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 64000);
    return format;
}
Audio and video encoders, muxer:
mVideoEncoder = MediaCodec.createEncoderByType(Preferences.MIME_TYPE);
mVideoEncoder.configure(mVideoFormat, null, null,
MediaCodec.CONFIGURE_FLAG_ENCODE);
mInputSurface = new InputSurface(mVideoEncoder.createInputSurface(),
mSavedEglContext);
mVideoEncoder.start();
if (recordAudio) {
    audioBufferSize = AudioRecord.getMinBufferSize(44100,
            AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
    mAudioRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, 44100,
            AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, audioBufferSize);
    mAudioRecorder.startRecording();

    mAudioEncoder = MediaCodec.createEncoderByType("audio/mp4a-latm");
    mAudioEncoder.configure(mAudioFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    mAudioEncoder.start();
}

try {
    String fileId = String.valueOf(System.currentTimeMillis());
    mMuxer = new MediaMuxer(dir.getPath() + "/Video" + fileId + ".mp4",
            MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
} catch (IOException ioe) {
    throw new RuntimeException("MediaMuxer creation failed", ioe);
}

mVideoTrackIndex = -1;
mAudioTrackIndex = -1;
mMuxerStarted = false;
I use this to set up video timestamps:
mInputSurface.setPresentationTime(mSurfaceTexture.getTimestamp());
drainVideoEncoder(false);
And this to set up audio timestamps:
lastQueuedPresentationTimeStampUs = getNextQueuedPresentationTimeStampUs();

if (endOfStream) {
    mAudioEncoder.queueInputBuffer(inputBufferIndex, 0, audioBuffer.length,
            lastQueuedPresentationTimeStampUs, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
} else {
    mAudioEncoder.queueInputBuffer(inputBufferIndex, 0, audioBuffer.length,
            lastQueuedPresentationTimeStampUs, 0);
}

mAudioBufferInfo.presentationTimeUs = getNextDeQueuedPresentationTimeStampUs();
mMuxer.writeSampleData(mAudioTrackIndex, encodedData, mAudioBufferInfo);
lastDequeuedPresentationTimeStampUs = mAudioBufferInfo.presentationTimeUs;

private static long getNextQueuedPresentationTimeStampUs() {
    long nextQueuedPresentationTimeStampUs =
            (lastQueuedPresentationTimeStampUs > lastDequeuedPresentationTimeStampUs)
                    ? (lastQueuedPresentationTimeStampUs + 1)
                    : (lastDequeuedPresentationTimeStampUs + 1);
    Log.i(TAG, "nextQueuedPresentationTimeStampUs: " + nextQueuedPresentationTimeStampUs);
    return nextQueuedPresentationTimeStampUs;
}

private static long getNextDeQueuedPresentationTimeStampUs() {
    Log.i(TAG, "nextDequeuedPresentationTimeStampUs: " + (lastDequeuedPresentationTimeStampUs + 1));
    lastDequeuedPresentationTimeStampUs++;
    return lastDequeuedPresentationTimeStampUs;
}
I took this approach from the example at https://github.com/OnlyInAmerica/HWEncoderExperiments/blob/audiotest/HWEncoderExperiments/src/main/java/net/openwatch/hwencoderexperiments/AudioEncodingTest.java in order to avoid the "timestampUs XXX < lastTimestampUs XXX" error.
Can someone help me figure out the problem, please?
It looks like you're using system-provided time stamps for video, but a simple counter for audio. Unless somehow the video timestamp is being used to seed the audio every frame and it's just not shown above.
For audio and video to play in sync, you need to have the same presentation time stamp on audio and video frames that are expected to be presented at the same time.
See also this related question.
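One common way to get such a shared time base (a sketch; SAMPLE_RATE and totalPcmFramesRead are assumed names) is to derive the audio presentation time from the number of PCM frames actually read from AudioRecord, so the audio timestamps advance in real time rather than by +1:

// Compute the audio presentationTimeUs from the recorded sample count.
int bytesRead = mAudioRecorder.read(audioBuffer, 0, audioBuffer.length);
if (bytesRead > 0) {
    long presentationTimeUs = totalPcmFramesRead * 1_000_000L / SAMPLE_RATE;
    totalPcmFramesRead += bytesRead / 2;   // 16-bit mono: 2 bytes per PCM frame
    mAudioEncoder.queueInputBuffer(inputBufferIndex, 0, bytesRead, presentationTimeUs, 0);
}

Both streams also need to count from the same origin; a simple option is to subtract the timestamp of each stream's first sample so both start near zero.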
I think the solution might be to just repeatedly read audio samples. You could check if a new video frame is available every N audio samples, and pass it to the muxer with the same timestamp as soon as a new video frame arrives.
int __buffer_offset = 0;
final int CHUNK_SIZE = 100; /* record 100 samples each iteration */
while (!__new_video_frame_available) {
this._audio_recorder.read(__recorded_data, __buffer_offset, CHUNK_SIZE);
__buffer_offset += CHUNK_SIZE;
}
I think that should work.
Kindest regards,
Wolfram
