Adding metadata to an m4a/mp4 AAC container - Android

I am encoding my PCM audio recorded data into AAC inside a m4a (mp4) container. It works fine, and now I want to add some metadata.
The ever-quoted documentation for this is this one. Yet, I do not understand that code: what is the condition for the while loop? What is KEY_MIME? How do I set bufferSize? How do I get currentVideoTrackTimeUs? Not surprisingly, exiftool is not showing what I am trying to add (GPS location), although it does detect that something extra is embedded.
I have been trying, without luck, to find a working example. Any clue?
This is what I tried so far:
...
outputFormat = codec.getOutputFormat();
audioTrackIdx = mux.addTrack(outputFormat);
MediaFormat metadataFormat = new MediaFormat();
metadataFormat.setString(MediaFormat.KEY_MIME, "application/gps");
int metadataTrackIndex = mux.addTrack(metadataFormat);
mux.start();
setM4Ametadata(mux,metadataTrackIndex,outBuffInfo);
...
private void setM4Ametadata(MediaMuxer muxer, int metadataTrackIndex, MediaCodec.BufferInfo info) {
    ByteBuffer metaData = ByteBuffer.allocate(100);
    metaData.putFloat(-22.9585325f);
    metaData.putFloat(-43.2161615f);
    MediaCodec.BufferInfo metaInfo = new MediaCodec.BufferInfo();
    metaInfo.presentationTimeUs = info.presentationTimeUs;
    metaInfo.offset = info.offset;
    metaInfo.flags = info.flags;
    metaInfo.size = 100;
    muxer.writeSampleData(metadataTrackIndex, metaData, metaInfo);
    muxer.writeSampleData(metadataTrackIndex, metaData, metaInfo);
}
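For plain GPS coordinates there may be a simpler route than a dedicated metadata track: MediaMuxer has a setLocation() method (API 19+) that writes the location into the MP4/M4A header, which is where tools like exiftool look for it. A minimal sketch, assuming mux is the same MediaMuxer instance as above and has not been started yet:

// Must be called after creating the MediaMuxer but before start().
// The coordinates are the ones hard-coded in the question.
mux.setLocation(-22.9585325f, -43.2161615f);
mux.start();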

Related

Android encoded stream (H264/AAC) audio does not play in Flash Player

I'm working on adding a live broadcasting feature to an Android app. I do so through RTMP and make use of the DailyMotion Android SDK, which in turn makes use of Kickflip.
Everything works perfectly, except for the playback of the audio on the website (which uses Flash). The audio does work in VLC, so it seems to be an issue with Flash being unable to decode the AAC audio.
For the audio I instantiate an encoder with the "audio/mp4a-latm" mime type. The Android developer docs state the following about this mime type: "audio/mp4a-latm" - AAC audio (note, this is raw AAC packets, not packaged in LATM!). I expect that my problem lies here, but so far I have not been able to find a solution for it.
Pretty much all my research, including this SO question about the matter pointed me in the direction of adding an ADTS header to the audio byte array. That results in the following code in the writeSampleData method:
boolean isHeader = false;
if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
    isHeader = true;
} else {
    pts = bufferInfo.presentationTimeUs - mFirstPts;
}
if (mFirstPts != -1 && pts >= 0) {
    pts /= 1000;
    byte data[] = new byte[bufferInfo.size + 7];
    addADTStoPacket(data, bufferInfo.size + 7);
    encodedData.position(bufferInfo.offset);
    encodedData.get(data, 7, bufferInfo.size);
    addDataPacket(new AudioPacket(data, isHeader, pts, mAudioFirstByte));
}
The addADTStoPacket method is identical to the one in the above mentioned SO post, but I will show it here regardless:
private void addADTStoPacket(byte[] packet, int packetLen) {
    int profile = 2;  // AAC LC
    // 39 = MediaCodecInfo.CodecProfileLevel.AACObjectELD
    int freqIdx = 4;  // 44.1 kHz
    int chanCfg = 1;  // CPE
    // fill in ADTS data
    packet[0] = (byte) 0xFF;
    packet[1] = (byte) 0xF9;
    packet[2] = (byte) (((profile - 1) << 6) + (freqIdx << 2) + (chanCfg >> 2));
    packet[3] = (byte) (((chanCfg & 3) << 6) + (packetLen >> 11));
    packet[4] = (byte) ((packetLen & 0x7FF) >> 3);
    packet[5] = (byte) (((packetLen & 7) << 5) + 0x1F);
    packet[6] = (byte) 0xFC;
}
The variables in the above method match the settings I have configured in the application, so I'm pretty sure that's fine.
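As an aside, those values could also be derived from the encoder's output MediaFormat instead of being hard-coded. A rough sketch, assuming outputFormat is the MediaFormat returned by the audio encoder (the table is the standard ADTS sampling-frequency index table):

int profile = 2; // AAC LC, matching the encoder configuration assumed here
int sampleRate = outputFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE);
int chanCfg = outputFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT);

// Standard ADTS sampling-frequency index table.
int[] adtsRates = {96000, 88200, 64000, 48000, 44100, 32000,
        24000, 22050, 16000, 12000, 11025, 8000, 7350};
int freqIdx = -1;
for (int i = 0; i < adtsRates.length; i++) {
    if (adtsRates[i] == sampleRate) {
        freqIdx = i; // 44100 Hz maps to index 4, as in the method above
        break;
    }
}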
The data is written to the output stream in the following method of the AudioPacket class:
@Override
public void writePayload(OutputStream outputStream) throws IOException {
    outputStream.write(mFirstByte);
    outputStream.write(mIsAudioSpecificConfic ? 0 : 1);
    outputStream.write(mData);
}
Am I missing something here? I could present more code if necessary, but I think this covers the most related parts. Thanks in advance and I really hope someone is able to help, I've been stuck for a couple of days now...

Replacing audio track of .mp4 file

I currently want to replace the audio of an .mp4 video file with another .mp3 audio file. If replacing the audio track of the original video is not possible, please give me a solution for how to keep both audio tracks and let the user select the desired audio track while playing.
I tried using MediaMuxer and MediaExtractor but still couldn't find the correct solution. Can anyone please help me?
Here is the MediaMuxer sample program from https://developer.android.com/reference/android/media/MediaMuxer.html:
MediaMuxer muxer = new MediaMuxer("temp.mp4", OutputFormat.MUXER_OUTPUT_MPEG_4);
// More often, the MediaFormat will be retrieved from MediaCodec.getOutputFormat()
// or MediaExtractor.getTrackFormat().
MediaFormat audioFormat = new MediaFormat(...);
MediaFormat videoFormat = new MediaFormat(...);
int audioTrackIndex = muxer.addTrack(audioFormat);
int videoTrackIndex = muxer.addTrack(videoFormat);
ByteBuffer inputBuffer = ByteBuffer.allocate(bufferSize);
boolean finished = false;
BufferInfo bufferInfo = new BufferInfo();
muxer.start();
while (!finished) {
    // getInputBuffer() will fill the inputBuffer with one frame of encoded
    // sample from either MediaCodec or MediaExtractor, set isAudioSample to
    // true when the sample is audio data, set up all the fields of bufferInfo,
    // and return true if there are no more samples.
    finished = getInputBuffer(inputBuffer, isAudioSample, bufferInfo);
    if (!finished) {
        int currentTrackIndex = isAudioSample ? audioTrackIndex : videoTrackIndex;
        muxer.writeSampleData(currentTrackIndex, inputBuffer, bufferInfo);
    }
}
muxer.stop();
muxer.release();
I am using Android API 23, and I am getting an error saying getInputBuffer and isAudioSample cannot be resolved.
MediaFormat audioFormat = new MediaFormat(...);
What should I write inside the parentheses? Where should I mention my video and audio files? I have searched a lot; please give me some solution to this problem.
Currently you can't write anything within the parentheses. You have to use the MediaFormat static methods:
MediaFormat audioFormat = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 160000, 1);
MediaFormat videoFormat = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_MPEG4, 1280, 720);
The values that I added here are arbitrary. You have to specify:
For the audio: the mime type of the resulting file, the bitrate, and the number of channels of the resulting audio.
For the video: the mime type of the resulting file, and the height and width of the resulting video.
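Since this question is about re-muxing existing files rather than encoding, the formats would normally come from a MediaExtractor instead of being created by hand. A rough sketch of pulling the video track format out of a source file; the file path and error handling are placeholders:

// Hypothetical path; replace with the real input video file.
MediaExtractor videoExtractor = new MediaExtractor();
videoExtractor.setDataSource("/sdcard/input_video.mp4");

MediaFormat videoFormat = null;
for (int i = 0; i < videoExtractor.getTrackCount(); i++) {
    MediaFormat format = videoExtractor.getTrackFormat(i);
    if (format.getString(MediaFormat.KEY_MIME).startsWith("video/")) {
        videoExtractor.selectTrack(i);
        videoFormat = format;
        break;
    }
}
// The same pattern applies to the audio source ("audio/" mime prefix). The
// formats are then passed to muxer.addTrack(...), and readSampleData() /
// getSampleTime() on each extractor supply the data for writeSampleData().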

Unable to mux both audio and video

I'm writing an app that records screen capture and audio using MediaCodec. I use MediaMuxer to mux the video and audio to create an mp4 file. I successfully managed to write video and audio separately; however, when I try muxing them together live, the result is unexpected. Either audio is played without video, or video is played right after the audio. My guess is that I'm doing something wrong with the timestamps, but I can't figure out what exactly. I already looked at these examples: https://github.com/OnlyInAmerica/HWEncoderExperiments/tree/audiotest/HWEncoderExperiments/src/main/java/net/openwatch/hwencoderexperiments and the ones on bigflake.com, and was not able to find the answer.
Here's my media formats configurations:
mVideoFormat = createVideoFormat();

private static MediaFormat createVideoFormat() {
    MediaFormat format = MediaFormat.createVideoFormat(
            Preferences.MIME_TYPE, mScreenWidth, mScreenHeight);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    format.setInteger(MediaFormat.KEY_BIT_RATE, Preferences.BIT_RATE);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, Preferences.FRAME_RATE);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL,
            Preferences.IFRAME_INTERVAL);
    return format;
}
mAudioFormat = createAudioFormat();

private static MediaFormat createAudioFormat() {
    MediaFormat format = new MediaFormat();
    format.setString(MediaFormat.KEY_MIME, "audio/mp4a-latm");
    format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
    format.setInteger(MediaFormat.KEY_SAMPLE_RATE, 44100);
    format.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 64000);
    return format;
}
Audio and video encoders, muxer:
mVideoEncoder = MediaCodec.createEncoderByType(Preferences.MIME_TYPE);
mVideoEncoder.configure(mVideoFormat, null, null,
        MediaCodec.CONFIGURE_FLAG_ENCODE);
mInputSurface = new InputSurface(mVideoEncoder.createInputSurface(),
        mSavedEglContext);
mVideoEncoder.start();

if (recordAudio) {
    audioBufferSize = AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_CONFIGURATION_MONO,
            AudioFormat.ENCODING_PCM_16BIT);
    mAudioRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, 44100,
            AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, audioBufferSize);
    mAudioRecorder.startRecording();

    mAudioEncoder = MediaCodec.createEncoderByType("audio/mp4a-latm");
    mAudioEncoder.configure(mAudioFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    mAudioEncoder.start();
}

try {
    String fileId = String.valueOf(System.currentTimeMillis());
    mMuxer = new MediaMuxer(dir.getPath() + "/Video"
            + fileId + ".mp4",
            MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
} catch (IOException ioe) {
    throw new RuntimeException("MediaMuxer creation failed", ioe);
}

mVideoTrackIndex = -1;
mAudioTrackIndex = -1;
mMuxerStarted = false;
I use this to set up video timestamps:
mInputSurface.setPresentationTime(mSurfaceTexture.getTimestamp());
drainVideoEncoder(false);
And this to set up audio timestamps:
lastQueuedPresentationTimeStampUs = getNextQueuedPresentationTimeStampUs();

if (endOfStream)
    mAudioEncoder.queueInputBuffer(inputBufferIndex, 0, audioBuffer.length, lastQueuedPresentationTimeStampUs, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
else
    mAudioEncoder.queueInputBuffer(inputBufferIndex, 0, audioBuffer.length, lastQueuedPresentationTimeStampUs, 0);

mAudioBufferInfo.presentationTimeUs = getNextDeQueuedPresentationTimeStampUs();
mMuxer.writeSampleData(mAudioTrackIndex, encodedData,
        mAudioBufferInfo);
lastDequeuedPresentationTimeStampUs = mAudioBufferInfo.presentationTimeUs;

private static long getNextQueuedPresentationTimeStampUs() {
    long nextQueuedPresentationTimeStampUs = (lastQueuedPresentationTimeStampUs > lastDequeuedPresentationTimeStampUs)
            ? (lastQueuedPresentationTimeStampUs + 1) : (lastDequeuedPresentationTimeStampUs + 1);
    Log.i(TAG, "nextQueuedPresentationTimeStampUs: " + nextQueuedPresentationTimeStampUs);
    return nextQueuedPresentationTimeStampUs;
}

private static long getNextDeQueuedPresentationTimeStampUs() {
    Log.i(TAG, "nextDequeuedPresentationTimeStampUs: " + (lastDequeuedPresentationTimeStampUs + 1));
    lastDequeuedPresentationTimeStampUs++;
    return lastDequeuedPresentationTimeStampUs;
}
I took it from this example https://github.com/OnlyInAmerica/HWEncoderExperiments/blob/audiotest/HWEncoderExperiments/src/main/java/net/openwatch/hwencoderexperiments/AudioEncodingTest.java in order to avoid the "timestampUs XXX < lastTimestampUs XXX" error.
Can someone help me figure out the problem, please?
It looks like you're using system-provided time stamps for video, but a simple counter for audio. Unless somehow the video timestamp is being used to seed the audio every frame and it's just not shown above.
For audio and video to play in sync, you need to have the same presentation time stamp on audio and video frames that are expected to be presented at the same time.
See also this related question.
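One common way to get stable audio timestamps (instead of a bare counter) is to derive the presentation time from the number of PCM samples fed to the encoder so far. A minimal sketch, assuming 16-bit mono PCM at the 44100 Hz sample rate configured above:

private static final int SAMPLE_RATE = 44100;
private long totalSamplesQueued = 0; // PCM samples (not bytes) handed to the audio encoder

private long nextAudioPtsUs(int bytesQueued) {
    // 16-bit mono PCM: 2 bytes per sample.
    long ptsUs = totalSamplesQueued * 1000000L / SAMPLE_RATE;
    totalSamplesQueued += bytesQueued / 2;
    return ptsUs;
}

The value returned here would be passed to queueInputBuffer(), and the encoder carries it through to the BufferInfo used for writeSampleData(), so audio time advances at the real capture rate rather than by one microsecond per buffer.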
I think the solution might be to just repeatedly read audio samples. You could check if a new video frame is available every N audio samples, and pass it to the muxer with the same timestamp as soon as a new video frame arrives.
int __buffer_offset = 0;
final int CHUNK_SIZE = 100; /* record 100 samples each iteration */
while (!__new_video_frame_available) {
    this._audio_recorder.read(__recorded_data, __buffer_offset, CHUNK_SIZE);
    __buffer_offset += CHUNK_SIZE;
}
I think that should work.
Kindest regards,
Wolfram

Audio Recording in Stereo giving same data in Left and Right channels

I am trying to record and process audio data based on differences between what gets recorded in the left and right channels. For this I am using the AudioRecord class, with MIC as the input source and STEREO mode.
recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
        AudioFormat.CHANNEL_IN_STEREO,
        AudioFormat.ENCODING_PCM_16BIT, bufferSize);
My issue is that I get exactly the same data in both channels (alternate samples are separated to get the individual channel inputs). Please help; I am not sure why this is happening.
Using this configuration:
private int audioSource = MediaRecorder.AudioSource.MIC;
private static int sampleRateInHz = 48000;
private static int channelConfig = AudioFormat.CHANNEL_IN_STEREO;
private static int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
The data in the audio buffer is laid out as follows:
leftChannel data: [0,1],[4,5]...
rightChannel data: [2,3],[6,7]...
So you need to separate the data.
readSize = audioRecord.read(audioShortData, 0, bufferSizeInBytes);
for (int i = 0; i < readSize / 2; i = i + 2) {
    leftChannelAudioData[i] = audiodata[2 * i];
    leftChannelAudioData[i + 1] = audiodata[2 * i + 1];
    rightChannelAudioData[i] = audiodata[2 * i + 2];
    rightChannelAudioData[i + 1] = audiodata[2 * i + 3];
}
Hope this is helpful.
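If the buffer is read into a short[] instead of a byte[], every array element is already a complete 16-bit sample, so de-interleaving reduces to even/odd indices. A sketch assuming the AudioRecord was configured with CHANNEL_IN_STEREO and ENCODING_PCM_16BIT:

short[] interleaved = new short[bufferSizeInBytes / 2];
int readShorts = audioRecord.read(interleaved, 0, interleaved.length);

short[] left = new short[readShorts / 2];
short[] right = new short[readShorts / 2];
for (int i = 0; i < readShorts / 2; i++) {
    left[i] = interleaved[2 * i];       // even indices: left channel
    right[i] = interleaved[2 * i + 1];  // odd indices: right channel
}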
Here is a working example for capturing audio in stereo (tested with Samsung Galaxy S3 4.4.2 SlimKat):
private void startRecording() {
    String filename = Environment.getExternalStorageDirectory().getPath() + "/SoundRecords/" + System.currentTimeMillis() + ".aac";
    File record = new File(filename);

    recorder = new MediaRecorder();
    recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
    recorder.setAudioEncodingBitRate(128000);
    recorder.setAudioSamplingRate(96000);
    recorder.setAudioChannels(2);
    recorder.setOutputFile(filename);
    t_filename.setText(record.getName());

    try {
        recorder.prepare();
        recorder.start();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
If your phone supports stereo capturing, then this should work :)
You cannot obtain a stereo input in this way on your device.
Although the Nexus 4 has two microphones, they are not intended for stereo recording, but instead are likely for background noise cancellation.
See https://groups.google.com/forum/#!topic/android-platform/SptXI964eEI where various low-level modifications of the audio system are discussed in an attempt to accomplish stereo recording.

How to encode non-camera video in Android

I am working on an Android application in which a video is dynamically generated by compositing a sequence of animation frames. I tried to use the Android MediaRecorder API for this but have not found a way to get it to accept a non-camera source as input. I have been attempting to use an FFMPEG port (based on the Rockplayer build) but am running into difficulties with missing functions, since I am using it as an encoder, not a decoder.
The iPhone version of this app uses AVAssetWriter from the AVFoundation framework.
Is there an easier way to do this or am I stuck slugging it out with FFMPEG?
This may help (see the note on resolution, though):
How to encode using the FFMpeg in Android (using H263)
I'm not sure if they did a custom build of ffmpeg or not; if so, they may be able to offer advice on porting a more feature-complete version.
-Anthony
OpenCV has a ViewBase class which takes the input from the camera as a frame and represents the frame as a bitmap. You can extend the ViewBase class and adapt it for your own use, even though installing OpenCV on Android isn't very easy.
When you extend SampleCvViewBase you will have the following function, which you can use. It's pretty much hard work, but the best I can think of.
@Override
protected Bitmap processFrame(VideoCapture capture) {
    capture.retrieve(picture, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);
    if (Utils.matToBitmap(picture, bmp))
        return bmp;
    bmp.recycle();
    return null;
}
You can use a pure Java open source library called JCodec ( http://jcodec.org ).
It contains a simple yet working H.264 encoder and MP4 muxer. The class below uses the JCodec low-level API and should be what you need (CORRECTED):
public class SequenceEncoder {
    private SeekableByteChannel ch;
    private Picture toEncode;
    private RgbToYuv420 transform;
    private H264Encoder encoder;
    private ArrayList<ByteBuffer> spsList;
    private ArrayList<ByteBuffer> ppsList;
    private CompressedTrack outTrack;
    private ByteBuffer _out;
    private int frameNo;
    private MP4Muxer muxer;

    public SequenceEncoder(File out) throws IOException {
        this.ch = NIOUtils.writableFileChannel(out);

        // Transform to convert between RGB and YUV
        transform = new RgbToYuv420(0, 0);

        // Muxer that will store the encoded frames
        muxer = new MP4Muxer(ch, Brand.MP4);

        // Add video track to muxer
        outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, 25);

        // Allocate a buffer big enough to hold output frames
        _out = ByteBuffer.allocate(1920 * 1080 * 6);

        // Create an instance of encoder
        encoder = new H264Encoder();

        // Encoder extra data (SPS, PPS) to be stored in a special place of MP4
        spsList = new ArrayList<ByteBuffer>();
        ppsList = new ArrayList<ByteBuffer>();
    }

    public void encodeImage(BufferedImage bi) throws IOException {
        if (toEncode == null) {
            toEncode = Picture.create(bi.getWidth(), bi.getHeight(), ColorSpace.YUV420);
        }

        // Perform conversion
        for (int i = 0; i < 3; i++)
            Arrays.fill(toEncode.getData()[i], 0);
        transform.transform(AWTUtil.fromBufferedImage(bi), toEncode);

        // Encode image into H.264 frame, the result is stored in '_out' buffer
        _out.clear();
        ByteBuffer result = encoder.encodeFrame(_out, toEncode);

        // Based on the frame above form correct MP4 packet
        spsList.clear();
        ppsList.clear();
        H264Utils.encodeMOVPacket(result, spsList, ppsList);

        // Add packet to video track
        outTrack.addFrame(new MP4Packet(result, frameNo, 25, 1, frameNo, true, null, frameNo, 0));

        frameNo++;
    }

    public void finish() throws IOException {
        // Push saved SPS/PPS to a special storage in MP4
        outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));

        // Write MP4 header and finalize recording
        muxer.writeHeader();
        NIOUtils.closeQuietly(ch);
    }

    public static void main(String[] args) throws IOException {
        SequenceEncoder encoder = new SequenceEncoder(new File("video.mp4"));
        for (int i = 1; i < 100; i++) {
            BufferedImage bi = ImageIO.read(new File(String.format("folder/img%08d.png", i)));
            encoder.encodeImage(bi);
        }
        encoder.finish();
    }
}
You can get the JCodec jar from the project web site.
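Depending on the JCodec release you pick up, there may also be a ready-made high-level encoder so that the class above is not needed at all; in the 0.1.x line it is org.jcodec.api.SequenceEncoder (the package and constructor have moved between versions, so treat this as a sketch rather than a guaranteed API):

import java.io.File;
import javax.imageio.ImageIO;
import org.jcodec.api.SequenceEncoder; // location/name varies between JCodec releases

public class SimpleSequenceEncoderDemo {
    public static void main(String[] args) throws Exception {
        // Encode a folder of PNG frames into an H.264/MP4 file.
        SequenceEncoder enc = new SequenceEncoder(new File("video.mp4"));
        for (int i = 1; i < 100; i++) {
            enc.encodeImage(ImageIO.read(new File(String.format("folder/img%08d.png", i))));
        }
        enc.finish();
    }
}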
