I have used MediaCodec to play AVC video files. Now I am trying to play video from a stream, but I couldn't find any example or good documentation on using MediaCodec for adaptive streaming. Can anyone point me to a good example, or just post what I need to do?
some code:
...
codec = MediaCodec.createDecoderByType(type);
format = new MediaFormat();
format.setString(MediaFormat.KEY_MIME, type);
format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, track.getInt("maxsize"));
format.setInteger(MediaFormat.KEY_WIDTH, videoWidth);
format.setInteger(MediaFormat.KEY_HEIGHT, videoHeight);
format.setInteger(MediaFormat.KEY_MAX_WIDTH, videoWidth);
format.setInteger(MediaFormat.KEY_MAX_HEIGHT, videoHeight);
...
mSurface = new Surface(mSurfaceTexture);
codec.configure(format, mSurface, null, 0);
codec.start();
...
Notice that I don't have csd-0 and csd-1 at init time; I'd like to submit them after the codec is started. How can I do that?
Now, when I call
int index = codec.dequeueInputBuffer(timeout * 1000);
index is always -1.
Any help would be appreciated.
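For reference, the MediaCodec documentation also allows the codec-specific data (the SPS and PPS for AVC) to be submitted after start() as a normal input buffer flagged with BUFFER_FLAG_CODEC_CONFIG, instead of via csd-0/csd-1 in the format. A minimal sketch, where spsPpsBytes is a hypothetical byte[] holding the SPS and PPS NAL units; note that dequeueInputBuffer takes its timeout in microseconds, and -1 is just INFO_TRY_AGAIN_LATER:
// Sketch: push SPS/PPS as codec config data after codec.start().
int inIndex = codec.dequeueInputBuffer(10000);               // timeout in microseconds
if (inIndex >= 0) {
    ByteBuffer buf = codec.getInputBuffers()[inIndex];
    buf.clear();
    buf.put(spsPpsBytes);                                    // hypothetical byte[] with SPS + PPS NAL units
    codec.queueInputBuffer(inIndex, 0, spsPpsBytes.length, 0,
            MediaCodec.BUFFER_FLAG_CODEC_CONFIG);
}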
I currently want to replace the audio of an .mp4 video file with another .mp3 audio file. If replacing the audio track of the original video is not possible, please give me a solution for how to keep both audio tracks and let the user select the desired audio track while playing.
I tried using MediaMuxer and MediaExtractor but still couldn't find the correct solution. Can anyone please help me?
In the MediaMuxer sample program at https://developer.android.com/reference/android/media/MediaMuxer.html:
MediaMuxer muxer = new MediaMuxer("temp.mp4", OutputFormat.MUXER_OUTPUT_MPEG_4);
// More often, the MediaFormat will be retrieved from MediaCodec.getOutputFormat()
// or MediaExtractor.getTrackFormat().
MediaFormat audioFormat = new MediaFormat(...);
MediaFormat videoFormat = new MediaFormat(...);
int audioTrackIndex = muxer.addTrack(audioFormat);
int videoTrackIndex = muxer.addTrack(videoFormat);
ByteBuffer inputBuffer = ByteBuffer.allocate(bufferSize);
boolean finished = false;
BufferInfo bufferInfo = new BufferInfo();
muxer.start();
while (!finished) {
    // getInputBuffer() will fill the inputBuffer with one frame of encoded
    // sample from either MediaCodec or MediaExtractor, set isAudioSample to
    // true when the sample is audio data, set up all the fields of bufferInfo,
    // and return true if there are no more samples.
    finished = getInputBuffer(inputBuffer, isAudioSample, bufferInfo);
    if (!finished) {
        int currentTrackIndex = isAudioSample ? audioTrackIndex : videoTrackIndex;
        muxer.writeSampleData(currentTrackIndex, inputBuffer, bufferInfo);
    }
}
muxer.stop();
muxer.release();
I am using Android API 23, and I am getting an error saying getInputBuffer and isAudioSample cannot be resolved.
MediaFormat audioFormat=new MediaFormat(...);
What should I write inside the parentheses? Where should I mention my video and audio files? I have searched a lot; please give me some solution to this problem.
Currently you can't write anything within the parentheses. You have to use the MediaFormat static factory methods:
MediaFormat audioFormat = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 160000, 1);
MediaFormat videoFormat = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_MPEG4, 1280, 720);
The values I added here are placeholders. You have to specify:
For the audio: the MIME type of the resulting file, the bit rate and the number of channels of the resulting audio.
For the video: the MIME type of the resulting file, the height and width of the resulting video.
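For completeness (this is a sketch, not part of the answer above): once the formats come from MediaExtractor.getTrackFormat(), the track copying itself can be done without re-encoding. The paths /sdcard/input.mp4, /sdcard/audio.m4a and /sdcard/output.mp4 are hypothetical, and the audio is assumed to already be AAC; an .mp3 file would first have to be decoded and re-encoded to AAC, because MediaMuxer's MP4 output generally does not accept an MP3 track.
// Sketch only: copy the video track of one file and the audio track of another
// into a single MP4. IOException handling omitted for brevity.
MediaExtractor videoExtractor = new MediaExtractor();
videoExtractor.setDataSource("/sdcard/input.mp4");
MediaExtractor audioExtractor = new MediaExtractor();
audioExtractor.setDataSource("/sdcard/audio.m4a");

MediaFormat videoFormat = selectTrack(videoExtractor, "video/");
MediaFormat audioFormat = selectTrack(audioExtractor, "audio/");

MediaMuxer muxer = new MediaMuxer("/sdcard/output.mp4",
        MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
int videoTrackIndex = muxer.addTrack(videoFormat);
int audioTrackIndex = muxer.addTrack(audioFormat);
muxer.start();

ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024);
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
copySamples(videoExtractor, muxer, videoTrackIndex, buffer, info);
copySamples(audioExtractor, muxer, audioTrackIndex, buffer, info);

muxer.stop();
muxer.release();

// Selects the first track whose MIME type starts with the given prefix.
private static MediaFormat selectTrack(MediaExtractor extractor, String mimePrefix) {
    for (int i = 0; i < extractor.getTrackCount(); i++) {
        MediaFormat format = extractor.getTrackFormat(i);
        if (format.getString(MediaFormat.KEY_MIME).startsWith(mimePrefix)) {
            extractor.selectTrack(i);
            return format;
        }
    }
    throw new IllegalArgumentException("No " + mimePrefix + " track found");
}

// Copies every sample of the selected track into the given muxer track.
private static void copySamples(MediaExtractor extractor, MediaMuxer muxer,
        int trackIndex, ByteBuffer buffer, MediaCodec.BufferInfo info) {
    while (true) {
        int size = extractor.readSampleData(buffer, 0);
        if (size < 0) {
            break;                                   // no more samples in this track
        }
        info.offset = 0;
        info.size = size;
        info.presentationTimeUs = extractor.getSampleTime();
        info.flags = extractor.getSampleFlags();     // carries the sync-frame flag over
        muxer.writeSampleData(trackIndex, buffer, info);
        extractor.advance();
    }
}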
I can successfully decode AAC using the NDK AMediaCodec API, but no audio is playing.
Here's my configuration:
AMediaFormat_setString( format, AMEDIAFORMAT_KEY_MIME, "audio/mp4a-latm" );
AMediaFormat_setInt32( format, AMEDIAFORMAT_KEY_CHANNEL_COUNT, 2 );
AMediaFormat_setInt32( format, AMEDIAFORMAT_KEY_SAMPLE_RATE, 44100 );
AMediaFormat_setInt32( format, AMEDIAFORMAT_KEY_IS_ADTS, 0 );
uint8_t es[2] = { 0x12, 0x12 };
AMediaFormat_setBuffer( format, "csd-0", es, 2 );
And here's what I do to decode:
ssize_t inputIndex = AMediaCodec_dequeueInputBuffer( decoder, kInputTimeout );
uint8_t* inputBuf = AMediaCodec_getInputBuffer( decoder, inputIndex, &inputSize );
// Copy AAC data into inputBuf ...
AMediaCodec_queueInputBuffer( decoder, inputIndex, 0, aacSize, pts, 0 );
ssize_t outputIndex = AMediaCodec_dequeueOutputBuffer( decoder, &outputBufferInfo, kOutputTimeout );
if( outputIndex >= 0 )
{
AMediaCodec_releaseOutputBuffer( decoder, outputIndex, true );
}
I'm not getting any errors, and outputBufferInfo.presentationTimeUs is being updated appropriately, so it seems to be decoding. However, no audio is being output. Is it right that releaseOutputBuffer does this for audio? I've tried setting its render boolean parameter to both true and false, but I get silence either way.
Does MediaCodec output audio like it does video?
No, MediaCodec doesn't automatically render audio; the render parameter to releaseOutputBuffer doesn't do anything for audio. (See the Java documentation for the MediaCodec class for more explanation of these matters, which may be lacking in the NDK documentation.)
You need to manually take the decoded output buffer and feed it to either AudioTrack or OpenSL ES in order to play it back.
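In Java terms (the NDK flow is analogous, with OpenSL ES doing the output), a minimal sketch of feeding the decoded buffers to AudioTrack, assuming the 44.1 kHz stereo 16-bit output configured above:
// Sketch: create an AudioTrack matching the decoder's output format and write
// each decoded PCM buffer to it.
int sampleRate = 44100;
int channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
int encoding = AudioFormat.ENCODING_PCM_16BIT;
int minBuf = AudioTrack.getMinBufferSize(sampleRate, channelConfig, encoding);
AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        channelConfig, encoding, minBuf, AudioTrack.MODE_STREAM);
audioTrack.play();

// For each decoded output buffer (outputIndex >= 0 from dequeueOutputBuffer):
ByteBuffer outBuf = codec.getOutputBuffers()[outputIndex];
byte[] pcm = new byte[bufferInfo.size];
outBuf.position(bufferInfo.offset);
outBuf.get(pcm);
audioTrack.write(pcm, 0, pcm.length);
codec.releaseOutputBuffer(outputIndex, false);   // the render flag is irrelevant for audio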
I'm writing an app that records screen capture and audio using MediaCodec. I use MediaMuxer to mux the video and audio into an mp4 file. I successfully managed to write video and audio separately; however, when I try muxing them together live, the result is unexpected: either audio is played without video, or video is played right after the audio. My guess is that I'm doing something wrong with timestamps, but I can't figure out what exactly. I already looked at these examples: https://github.com/OnlyInAmerica/HWEncoderExperiments/tree/audiotest/HWEncoderExperiments/src/main/java/net/openwatch/hwencoderexperiments and the ones on bigflake.com, and was not able to find the answer.
Here are my media format configurations:
mVideoFormat = createVideoFormat();
private static MediaFormat createVideoFormat() {
MediaFormat format = MediaFormat.createVideoFormat(
Preferences.MIME_TYPE, mScreenWidth, mScreenHeight);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, Preferences.BIT_RATE);
format.setInteger(MediaFormat.KEY_FRAME_RATE, Preferences.FRAME_RATE);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL,
Preferences.IFRAME_INTERVAL);
return format;
}
mAudioFormat = createAudioFormat();
private static MediaFormat createAudioFormat() {
MediaFormat format = new MediaFormat();
format.setString(MediaFormat.KEY_MIME, "audio/mp4a-latm");
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
format.setInteger(MediaFormat.KEY_SAMPLE_RATE, 44100);
format.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1);
format.setInteger(MediaFormat.KEY_BIT_RATE, 64000);
return format;
}
Audio and video encoders, muxer:
mVideoEncoder = MediaCodec.createEncoderByType(Preferences.MIME_TYPE);
mVideoEncoder.configure(mVideoFormat, null, null,
MediaCodec.CONFIGURE_FLAG_ENCODE);
mInputSurface = new InputSurface(mVideoEncoder.createInputSurface(),
mSavedEglContext);
mVideoEncoder.start();
if (recordAudio){
audioBufferSize = AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_CONFIGURATION_MONO,
AudioFormat.ENCODING_PCM_16BIT);
mAudioRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, 44100,
AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, audioBufferSize);
mAudioRecorder.startRecording();
mAudioEncoder = MediaCodec.createEncoderByType("audio/mp4a-latm");
mAudioEncoder.configure(mAudioFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mAudioEncoder.start();
}
try {
String fileId = String.valueOf(System.currentTimeMillis());
mMuxer = new MediaMuxer(dir.getPath() + "/Video"
+ fileId + ".mp4",
MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
} catch (IOException ioe) {
throw new RuntimeException("MediaMuxer creation failed", ioe);
}
mVideoTrackIndex = -1;
mAudioTrackIndex = -1;
mMuxerStarted = false;
I use this to set up video timestamps:
mInputSurface.setPresentationTime(mSurfaceTexture.getTimestamp());
drainVideoEncoder(false);
And this to set up audio time stamps:
lastQueuedPresentationTimeStampUs = getNextQueuedPresentationTimeStampUs();
if(endOfStream)
mAudioEncoder.queueInputBuffer(inputBufferIndex, 0, audioBuffer.length, lastQueuedPresentationTimeStampUs, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
else
mAudioEncoder.queueInputBuffer(inputBufferIndex, 0, audioBuffer.length, lastQueuedPresentationTimeStampUs, 0);
mAudioBufferInfo.presentationTimeUs = getNextDeQueuedPresentationTimeStampUs();
mMuxer.writeSampleData(mAudioTrackIndex, encodedData,
mAudioBufferInfo);
lastDequeuedPresentationTimeStampUs = mAudioBufferInfo.presentationTimeUs;
private static long getNextQueuedPresentationTimeStampUs(){
long nextQueuedPresentationTimeStampUs = (lastQueuedPresentationTimeStampUs > lastDequeuedPresentationTimeStampUs)
? (lastQueuedPresentationTimeStampUs + 1) : (lastDequeuedPresentationTimeStampUs + 1);
Log.i(TAG, "nextQueuedPresentationTimeStampUs: " + nextQueuedPresentationTimeStampUs);
return nextQueuedPresentationTimeStampUs;
}
private static long getNextDeQueuedPresentationTimeStampUs(){
Log.i(TAG, "nextDequeuedPresentationTimeStampUs: " + (lastDequeuedPresentationTimeStampUs + 1));
lastDequeuedPresentationTimeStampUs ++;
return lastDequeuedPresentationTimeStampUs;
}
I took it from this example https://github.com/OnlyInAmerica/HWEncoderExperiments/blob/audiotest/HWEncoderExperiments/src/main/java/net/openwatch/hwencoderexperiments/AudioEncodingTest.java in order to avoid the "timestampUs XXX < lastTimestampUs XXX" error.
Can someone help me figure out the problem, please?
It looks like you're using system-provided time stamps for video, but a simple counter for audio. Unless somehow the video timestamp is being used to seed the audio every frame and it's just not shown above.
For audio and video to play in sync, you need to have the same presentation time stamp on audio and video frames that are expected to be presented at the same time.
See also this related question.
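As a sketch of that idea (not the answerer's code): derive each audio presentation time from the number of PCM samples already handed to the encoder, so audio and video share one time base. This assumes 16-bit mono at 44100 Hz, matching the configuration above; whether System.nanoTime() matches the SurfaceTexture timestamp source depends on the producer, so both streams may need rebasing to a common start time.
// Sketch: compute the audio PTS from the amount of PCM queued so far, anchored
// to a start time, instead of incrementing a bare counter.
private long mAudioStartTimeUs = -1;
private long mTotalSamplesQueued = 0;

private long getAudioPtsUs(int bytesAboutToBeQueued) {
    if (mAudioStartTimeUs < 0) {
        // Assumption: the video timestamps come from the same clock family as
        // System.nanoTime(); otherwise rebase both streams to a shared start.
        mAudioStartTimeUs = System.nanoTime() / 1000;
    }
    long ptsUs = mAudioStartTimeUs + (mTotalSamplesQueued * 1000000L) / 44100;
    mTotalSamplesQueued += bytesAboutToBeQueued / 2;    // 16-bit mono: 2 bytes per sample
    return ptsUs;
}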
I think the solution might be to just repeatedly read audio samples. You could check if a new video frame is available every N audio samples, and pass it to the muxer with the same timestamp as soon as a new video frame arrives.
int __buffer_offset = 0;
final int CHUNK_SIZE = 100; /* record 100 samples each iteration */
while (!__new_video_frame_available) {
    this._audio_recorder.read(__recorded_data, __buffer_offset, CHUNK_SIZE);
    __buffer_offset += CHUNK_SIZE;
}
I think that should work.
Kindest regards,
Wolfram
I am trying to use MediaCodec to save a series of images, stored as byte arrays in a file, to a video file. I have tested these images on a SurfaceView (playing them in series) and I can see them fine. I have looked at many examples using MediaCodec, and here is what I understand (please correct me if I am wrong):
Get InputBuffers from MediaCodec object -> fill it with your frame's image data -> queue the input buffer -> get coded output buffer -> write it to a file -> increase presentation time and repeat
However, I have tested this a lot and I end up with one of two cases:
All sample projects I tried to imitate caused the media server to die when calling queueInputBuffer for the second time.
I tried calling codec.flush() at the end (after saving the output buffer to file, although none of the examples I saw did this) and the media server did not die; however, I am not able to open the output video file with any media player, so something is wrong.
Here is my code:
MediaCodec codec = MediaCodec.createEncoderByType(MIMETYPE);
MediaFormat mediaFormat = null;
if (CamcorderProfile.hasProfile(CamcorderProfile.QUALITY_720P)) {
    mediaFormat = MediaFormat.createVideoFormat(MIMETYPE, 1280, 720);
} else {
    mediaFormat = MediaFormat.createVideoFormat(MIMETYPE, 720, 480);
}
mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, 700000);
mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 10);
mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
codec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
codec.start();

ByteBuffer[] inputBuffers = codec.getInputBuffers();
ByteBuffer[] outputBuffers = codec.getOutputBuffers();
boolean sawInputEOS = false;
int inputBufferIndex = -1, outputBufferIndex = -1;
BufferInfo info = null;

// loop to read YUV byte array from file

    inputBufferIndex = codec.dequeueInputBuffer(WAITTIME);
    if (bytesread <= 0) sawInputEOS = true;

    if (inputBufferIndex >= 0) {
        if (!sawInputEOS) {
            int samplesiz = dat.length;
            inputBuffers[inputBufferIndex].put(dat);
            codec.queueInputBuffer(inputBufferIndex, 0, samplesiz, presentationTime, 0);
            presentationTime += 100;

            info = new BufferInfo();
            outputBufferIndex = codec.dequeueOutputBuffer(info, WAITTIME);
            Log.i("BATA", "outputBufferIndex=" + outputBufferIndex);
            if (outputBufferIndex >= 0) {
                byte[] array = new byte[info.size];
                outputBuffers[outputBufferIndex].get(array);
                if (array != null) {
                    try {
                        dos.write(array);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
                codec.releaseOutputBuffer(outputBufferIndex, false);
                inputBuffers[inputBufferIndex].clear();
                outputBuffers[outputBufferIndex].clear();
                if (sawInputEOS) break;
            }
        } else {
            codec.queueInputBuffer(inputBufferIndex, 0, 0, presentationTime, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
            info = new BufferInfo();
            outputBufferIndex = codec.dequeueOutputBuffer(info, WAITTIME);
            if (outputBufferIndex >= 0) {
                byte[] array = new byte[info.size];
                outputBuffers[outputBufferIndex].get(array);
                if (array != null) {
                    try {
                        dos.write(array);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
                codec.releaseOutputBuffer(outputBufferIndex, false);
                inputBuffers[inputBufferIndex].clear();
                outputBuffers[outputBufferIndex].clear();
                break;
            }
        }
    }
}

codec.flush();
try {
    fstream2.close();
    dos.flush();
    dos.close();
} catch (IOException e) {
    e.printStackTrace();
}
codec.stop();
codec.release();
codec = null;
return true;
}
My question is: how can I get a working video from a stream of images using MediaCodec? What am I doing wrong?
Another question (if I am not too greedy): I would like to add an audio track to this video. Can it be done with MediaCodec as well, or must I use FFmpeg?
Note: I know about MediaMuxer in Android 4.3; however, it is not an option for me as my app must work on Android 4.1+.
Update
Thanks to fadden's answer, I was able to reach EOS without the media server dying (the code above is after modification). However, the file I am getting produces gibberish. Here is a snapshot of the video I get (it only works as a .h264 file).
My input image format is YUV (NV21 from the camera preview). I can't get it to be any playable format. I tried all the COLOR_FormatYUV420 formats and got the same gibberish output. And I still can't find a way (using MediaCodec) to add audio.
I think you have the right general idea. Some things to be aware of:
Not all devices support COLOR_FormatYUV420SemiPlanar. Some only accept planar. (Android 4.3 introduced CTS tests to ensure that the AVC codec supports one or the other.)
It's not the case that queueing an input buffer will immediately result in the generation of one output buffer. Some codecs may accumulate several frames of input before producing output, and may produce output after your input has finished. Make sure your loops take that into account (e.g. your inputBuffers[].clear() will blow up if it's still -1).
Don't try to submit data and send EOS with the same queueInputBuffer call. The data in that frame may be discarded. Always send EOS with a zero-length buffer.
The output of the codecs is generally pretty "raw", e.g. the AVC codec emits an H.264 elementary stream rather than a "cooked" .mp4 file. Many players won't accept this format. If you can't rely on the presence of MediaMuxer you will need to find another way to cook the data (search around on stackoverflow for ideas).
It's certainly not expected that the mediaserver process would crash.
You can find some examples and links to the 4.3 CTS tests here.
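To illustrate the EOS and draining points above, here is a rough sketch of a drain loop (reusing WAITTIME, outputBuffers and dos from the question's code; illustrative only, not a drop-in fix):
// Sketch: signal EOS with an empty buffer, then keep draining until the codec
// echoes BUFFER_FLAG_END_OF_STREAM on an output buffer.
int inIndex = codec.dequeueInputBuffer(WAITTIME);
if (inIndex >= 0) {
    codec.queueInputBuffer(inIndex, 0, 0, presentationTime,
            MediaCodec.BUFFER_FLAG_END_OF_STREAM);
}

MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
boolean outputDone = false;
while (!outputDone) {
    int outIndex = codec.dequeueOutputBuffer(info, WAITTIME);
    if (outIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
        // No output yet: the codec may still be buffering input, so just retry.
    } else if (outIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
        outputBuffers = codec.getOutputBuffers();
    } else if (outIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        MediaFormat newFormat = codec.getOutputFormat();  // would feed MediaMuxer.addTrack() where available
    } else if (outIndex >= 0) {
        if (info.size > 0) {
            byte[] chunk = new byte[info.size];
            outputBuffers[outIndex].position(info.offset);
            outputBuffers[outIndex].get(chunk);
            try {
                dos.write(chunk);                         // still a raw H.264 elementary stream, as noted above
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        codec.releaseOutputBuffer(outIndex, false);
        if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
            outputDone = true;
        }
    }
}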
Update: As of Android 4.3, MediaCodec and Camera have no ByteBuffer formats in common, so at the very least you will need to fiddle with the chroma planes. However, that sort of problem manifests very differently (as shown in the images for this question).
The image you added looks like video, but with stride and/or alignment issues. Make sure your pixels are laid out correctly. In the CTS EncodeDecodeTest, the generateFrame() method (line 906) shows how to encode both planar and semi-planar YUV420 for MediaCodec.
The easiest way to avoid the format issues is to move the frames through a Surface (like the CameraToMpegTest sample), but unfortunately that's not possible in Android 4.1.
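Since the update mentions NV21 input and COLOR_FormatYUV420SemiPlanar, one concrete thing to check is the chroma ordering: NV21 stores interleaved VU after the Y plane, while the semi-planar layout most encoders expect is NV12 (interleaved UV). A minimal conversion sketch, assuming no row padding (i.e. the encoder's stride equals the frame width), which is not guaranteed on all devices:
// Sketch: NV21 (Y plane followed by interleaved VU) -> NV12 (Y plane followed by
// interleaved UV), the layout COLOR_FormatYUV420SemiPlanar typically expects.
private static byte[] nv21ToNv12(byte[] nv21, int width, int height) {
    byte[] nv12 = new byte[nv21.length];
    int ySize = width * height;
    System.arraycopy(nv21, 0, nv12, 0, ySize);   // the Y plane is identical
    for (int i = ySize; i < nv21.length; i += 2) {
        nv12[i] = nv21[i + 1];                   // U
        nv12[i + 1] = nv21[i];                   // V
    }
    return nv12;
}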
I am using Jelly Bean's hardware MediaCodec. I am trying to encode a video, then decode it and display it using the codecs (in video/avc format).
I am using two buttons to start and stop the video rendering. The first time I render the video, it is displayed correctly. When I start the video a second time, it is not displayed and the following error is thrown:
"NOT in AVI Mode"
Here are the code snippets for the Start and Stop buttons.
public void Stop() {
    try {
        // stopping the decoder alone
        decoderMediaCodec.flush();
        decoderMediaCodec.stop();
        decoderMediaCodec.release();
        // Tried with various combinations of flush(), stop() and release()
    } catch (Exception e) {
        e.printStackTrace();
    }
}
public void Start(Surface view) {
    try {
        decoderMediaCodec = MediaCodec.createDecoderByType(mime); // Initialize the decoder again
        MediaFormat format = MediaFormat.createVideoFormat(mime, mWidth, mHeight);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
        format.setInteger(MediaFormat.KEY_BIT_RATE, bitrate);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT, colorFormat);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, framerate);
        decoderMediaCodec.configure(format, view, null, 0);
        decoderMediaCodec.start();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Kindly help me with the video rendering.
Note: the data received in the decoder is valid; the data was checked using the Beyond Compare tool.
I am getting -1 for outputBufferIndex
int outputBufferIndex = decoderMediaCodec.dequeueOutputBuffer(bufferInfo, 0);
In the logs I get:
E/( 271):
E/( 271):not in avi mode
E/( 271):
E/( 271): not in avi mode
It would be good if you could share more logs from when you encounter the issue. From the description of your issue, could you confirm that the Surface being passed in your second Start call is a valid handle?
If you can rebuild Android, enabling log traces in MediaCodec.cpp, specifically in the MediaCodec::setNativeWindow method, would probably be helpful.
P.S: For a decoder, why are I-frame interval, bitrate and framerate being set?
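As an aside (an assumption, not something the logs above confirm), the Surface handle mentioned in the first paragraph can be sanity-checked with Surface.isValid() before the second configure():
// Sketch: guard the second configure() against a released or abandoned Surface.
public void Start(Surface view) {
    if (view == null || !view.isValid()) {
        Log.e("Decoder", "Surface is not valid, not restarting the decoder");
        return;
    }
    // ... create and configure the decoder as above ...
}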