Mux video with my own PCM audio track - Android

Using Android MediaMuxer, what would be a decent way to add my own PCM track as the audio track in the final movie?
In the movie, at a certain point, the video slows down, stops, then accelerates and restarts. For the video part it's easy to adjust the presentation time directly, but audio is processed chunk by chunk, which makes a slow-down, a stop, and a restart much less intuitive to handle in the audio track.
Currently, while iterating through the buffers received from the source, I slow down the whole track like this:
// Multiply the presentation time by the slow-down ratio (3).
audioEncoderOutputBufferInfo.PresentationTimeUs =
    audioEncoderOutputBufferInfo.PresentationTimeUs * ratio;
// Expand the sample data by 3. (I just realized I haven't respected
// the sample alignment, but the problem isn't white noise anyway.)
encoderOutputBuffer = Slowdown(encoderOutputBuffer, 3);
// Then write it to the muxer.
muxer.WriteSampleData(outputAudioTrack, encoderOutputBuffer, audioEncoderOutputBufferInfo);
But this just doesn't play. Of course, since the MediaFormat from the source was copied to the destination, the declared duration is three times shorter than the actual audio data.
Could I just take the whole PCM from an input, edit the byte[] array, and add it as a track to the MediaMuxer?

If you want to slow down your audio samples, you need to do it before you encode them, i.e. before you queue the input buffer of your audio codec.
In my experience, audio presentation timestamps are ignored by most players out there (I tried VLC and ffplay). If you want audio and video to stay in sync, you must make sure that you actually have enough audio samples to fill the gap between two PTS values; otherwise the player will just start playing the following samples regardless of their PTS.
Furthermore, you cannot just mux PCM samples with the MediaMuxer; you need to encode them first.
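As a rough Java sketch of that order of operations (the stretchPcm16 helper, the 16-bit mono PCM assumption, and the variables decodedPcmChunk, presentationTimeUs, audioEncoder and TIMEOUT_US are mine, not from the question):

// Hypothetical helper: naive slow-down of 16-bit PCM by repeating each
// 2-byte sample 'factor' times, so sample alignment is preserved.
// (A real implementation would time-stretch to keep the pitch.)
static byte[] stretchPcm16(byte[] pcm, int factor) {
    byte[] out = new byte[pcm.length * factor];
    int o = 0;
    for (int i = 0; i + 1 < pcm.length; i += 2) {
        for (int r = 0; r < factor; r++) {
            out[o++] = pcm[i];      // low byte
            out[o++] = pcm[i + 1];  // high byte
        }
    }
    return out;
}

// Feed the stretched PCM to the audio encoder; the muxer must only
// ever see the encoder's output, never raw PCM.
byte[] stretched = stretchPcm16(decodedPcmChunk, 3);
int inIndex = audioEncoder.dequeueInputBuffer(TIMEOUT_US);
if (inIndex >= 0) {
    ByteBuffer inBuf = audioEncoder.getInputBuffer(inIndex);
    inBuf.clear();
    inBuf.put(stretched);
    // Scale the pts by the same factor so timestamps match the data.
    audioEncoder.queueInputBuffer(inIndex, 0, stretched.length,
            presentationTimeUs * 3, 0);
}

Note that naively repeating samples lowers the pitch by the same factor; a proper time-stretch algorithm (e.g. WSOLA) would preserve it.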

Related

Playing music backwards using MediaExtractor

I want to create an app that plays music backwards, using the Android MediaExtractor to decode the audio and stitch it together in reverse.
Since I cannot get the MediaExtractor to move backwards, I tried cutting a chunk of a certain byte size from the end, flipping it, playing it, then cutting the next chunk from the end and playing it, and so on.
To do this I need MediaExtractor.seekTo to jump backwards, but the position to seek to is ambiguous. I tried seeking to a microsecond position calculated from the sample rate and the chunk's byte size, but when I use the MediaExtractor.SEEK_TO_CLOSEST_SYNC flag, the joins between chunks are slightly shifted and noise occurs.
So I think this problem could be solved if I knew the byte position of each sync sample, or if I could seekTo in byte units.
Any help would be appreciated.
Since you want to play it in reverse, couldn't you use SEEK_TO_PREVIOUS_SYNC instead of SEEK_TO_CLOSEST_SYNC for a better match?
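For illustration, a rough sketch of that approach (the 500 ms window size is arbitrary, durationUs would come from the track's MediaFormat, and the decode/reverse step is elided):

// Walk the track backwards in fixed time windows, seeking to the
// sync sample at or before each window start so nothing is skipped.
final long CHUNK_US = 500_000;  // arbitrary 500 ms window
long position = durationUs;     // track duration from the MediaFormat
while (position > 0) {
    long target = Math.max(0, position - CHUNK_US);
    extractor.seekTo(target, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);
    // ... decode from here up to 'position', reverse the PCM,
    // and write it to the AudioTrack ...
    position = target;
}

Because SEEK_TO_PREVIOUS_SYNC lands at or before the target, each decoded window fully covers the join with the previous one, instead of starting slightly past it.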

Android MediaCodec How to Frame Accurately Trim Audio

I am building the capability to frame-accurately trim video files on Android. Transcoding is implemented with MediaExtractor, MediaCodec, and MediaMuxer. I need help truncating arbitrary Audio frames in order to match their Video frame counterparts.
I believe the Audio frames must be trimmed in the Decoder output buffer, which is the logical place in which uncompressed audio data is available for editing.
For in/out trims I am calculating the necessary offset and size adjustments to the raw Audio buffer to shoehorn it into the available endcap frames, and I am submitting the data with the following code:
MediaCodec.BufferInfo info = pendingAudioDecoderOutputBufferInfos.poll();
...
// Duplicate so position/limit can be moved without touching the codec's view.
ByteBuffer decoderOutputBuffer = audioDecoder.getOutputBuffer(decoderIndex).duplicate();
// Window the decoded PCM down to the region to keep.
decoderOutputBuffer.position(info.offset);
decoderOutputBuffer.limit(info.offset + info.size);
encoderInputBuffer.position(0);
encoderInputBuffer.put(decoderOutputBuffer);
info.flags |= MediaCodec.BUFFER_FLAG_END_OF_STREAM;
audioEncoder.queueInputBuffer(encoderIndex, info.offset, info.size, presentationTime, info.flags);
audioDecoder.releaseOutputBuffer(decoderIndex, false);
My problem is that the data adjustments appear to affect only the data copied to the output audio buffer, not the length of the audio frame that gets written to the MediaMuxer. The output video either ends up with several milliseconds of missing audio at the end of the clip, or, if I write too much data, the final audio frame gets dropped from the end of the clip entirely.
How to properly trim an Audio Frame?
There are a few things at play here:
As Dave pointed out, you should pass 0 instead of info.offset to audioEncoder.queueInputBuffer; you already took the offset of the decoder output buffer into account when you set the buffer position with decoderOutputBuffer.position(info.offset). But perhaps you already compensate for that somewhere.
I'm not sure whether MediaCodec audio encoders accept input in arbitrarily sized chunks, or whether you need to send exactly one full audio frame at a time. I think they do accept arbitrary chunks, in which case you're fine; if not, you need to buffer the audio yourself and pass it to the encoder once you have a full frame (in case you trimmed some samples at the start).
Keep in mind that audio is also frame based (for AAC it's 1024-sample frames, unless you use the low-delay variants or HE-AAC), so at 44.1 kHz you can only control the audio duration with roughly 23 ms granularity. If you want the audio to end precisely after the right number of samples, you need container signaling to indicate this. I'm not sure whether the MediaCodec audio encoder flushes whatever half frame you have at the end, or whether you need to pad with zeros yourself to get the last few samples out if you aren't aligned to the frame size. It might not be needed, though.
Encoding AAC audio does introduce some delay into the audio stream; after decoding, you'll have a number of priming samples at the start of the decoded stream (the exact number of these depends on the encoder - for the software encoder in Android for AAC-LC, it's probably 2048 samples, but it might also vary). For the case of 2048 samples, it exactly lines up with 2 frames of audio, but it can also be something that isn't a whole number of frames. I don't think MediaCodec signals the exact amount of delay either. If you drop the 2 first output packets from the encoder (in case the delay is 2048 samples), you'll avoid the extra delay, but the actual decoded audio for the first few frames won't be exactly right. (The priming packets are necessary to be able to properly represent whatever samples your stream starts with, otherwise it will more or less converge towards your intended audio within 2048 samples.)
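Putting the first two points together, the submission might look roughly like this (bytesToTrim is a hypothetical value computed from your desired out point, and 16-bit stereo, i.e. 4 bytes per PCM frame, is assumed):

// Trim from the end of the decoded buffer, keeping the cut aligned
// to whole PCM frames (2 bytes per sample * 2 channels = 4 bytes).
int trimmedSize = info.size - bytesToTrim;
trimmedSize -= trimmedSize % 4;

decoderOutputBuffer.position(info.offset);
decoderOutputBuffer.limit(info.offset + trimmedSize);
encoderInputBuffer.clear();
encoderInputBuffer.put(decoderOutputBuffer);

// Offset is 0 here: the buffer window above already accounts for
// info.offset, so passing it again would skip valid data.
audioEncoder.queueInputBuffer(encoderIndex, 0, trimmedSize,
        presentationTime, info.flags);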

Android MediaCodec API video plays too fast

I'm currently working with Android Jelly Bean MediaCodec API to develop a simple video player.
I extract the tracks and play audio and video in separate threads. The problem is that the video track always plays too fast.
Where can be the problem hidden?
Both audio and video are treated almost the same way, except audio is played via AudioTrack and video is rendered to the surface.
If you render frames at maximum speed you'll hit 60fps on most devices. You need to pace it according to the presentation time stamps provided by the encoder.
For example, if the input is a format supported by Android (e.g. a typical .mp4 file), you can use the MediaExtractor class to extract each frame. The time stamp can be retrieved with getSampleTime(). You want to delay rendering by the difference between timestamps on consecutive frames -- don't assume that the first frame will have a timestamp of zero.
Also, don't assume that video frames appear at a constant rate (e.g. 30fps). For some sources the frames will arrive unevenly.
See the "Play video (TextureView)" example in Grafika, particularly the SpeedControlCallback class. The gen-eight-rects.mp4 video uses variable frame durations to exercise it. If you check the "Play at 60fps" box, the presentation time stamps are ignored.

Delay in streaming audio

I am trying to stream audio through a server. I have set everything up, and recording and playing back static audio works fine, but when I try to stream audio there is a delay on the playback side.
I did a Google search but couldn't find the proper way of doing this. I am using the AudioRecord and AudioTrack Android media APIs for sending and receiving audio data. Can anybody tell me how to handle this delay?
I have posted my code on a Google group to give a clearer picture.
I tried holding 5 chunks of audio data from the server in a buffer, playing them back once all 5 chunks are filled, then fetching and buffering the next 5 chunks, and so on up to 1024 bytes of data (at which point it writes to the AudioTrack and play() is called). This also has a delay. Any other solutions?
If you're really trying to do this unbuffered, make sure whatever playback tool you're using is actually playing it back without a buffer. Even so, you will be hard-pressed to avoid a delay. Nothing on TV, radio, etc. is truly 'live'; there is always some kind of delay. With internet streams you're sending a large amount of data constantly. Besides the travel time, all of this data has to be kept in a particular order, and nobody wants choppy playback while the end user's machine catches up. I've seen Flash players for major networks keep massive cache files on my computer during playback, yet their players do not skip or wait to buffer. (If you load something up and notice a few hundred MB of extra memory being used, possibly more during playback, that's what that is.)
You might be able to get away with a very small buffer (the standard in the past used to be 30-60 seconds, and a lot of players still default to this) using VLC; I have been able to set its buffer very low, but only on incredibly low-quality streams/videos. The big problem, I'd guess, is that your playback side sets the buffer: if it is set to a 60-second buffer, it doesn't matter what you do server-side...the client will wait until it has that much of a chunk and only then begin playback.
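On the Android side, one thing worth checking is that the AudioTrack buffer itself is as small as the platform allows; a minimal sketch, assuming 8 kHz mono 16-bit PCM (match whatever your recording side actually sends):

// Size the client-side buffer at the platform minimum so playback
// latency is bounded by the track buffer, not an arbitrary cache.
int sampleRate = 8000;  // assumed; must match the recording side
int minBuf = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        minBuf, AudioTrack.MODE_STREAM);
track.play();
// Write each network chunk as it arrives; write() only blocks when
// the track's buffer is full, so small chunks start playing quickly.
// track.write(chunk, 0, chunk.length);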

Is it possible to record video from the camera through a buffer into a file?

In particular, this is the setup I want: The built-in camera on an Android device is recording live video. This video is being saved to a buffer, which holds a few seconds of video. When the buffer is full, the oldest frames from the buffer are added to a video file on disk to make room for the new frames coming from the camera. The data in the buffer could then be used to skip backwards briefly in the video.
I was thinking of using MediaRecorder (and MediaPlayer?) with a Surface to obtain the video, using addCallbackBuffer to create the buffer, and then setPreviewCallbackWithBuffer to display the video from the buffer. I'm not entirely sure where to go from there, or whether that is the wrong track altogether; I am somewhat of a novice at this.
MediaRecorder is already doing the buffering and file writing for you.
If you do need to control the details, have a look at: http://code.google.com/p/android-video-conference/source/browse/trunk/trunk/Android/upStream/src/my/video/stream/Stream.java
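If you do end up rolling the pre-roll buffer yourself, the idea might look roughly like this (an entirely hypothetical sketch; real code would need to handle the container format and keyframes rather than writing raw chunks):

import java.io.FileOutputStream;
import java.io.IOException;
import java.util.ArrayDeque;

// Hypothetical pre-roll buffer: keeps the newest chunks in memory and
// spills the oldest to disk once capacity is exceeded, so the file
// always trails the live feed by the buffer's duration.
class PrerollBuffer {
    private final ArrayDeque<byte[]> chunks = new ArrayDeque<byte[]>();
    private final FileOutputStream out;
    private final int capacityBytes;
    private int sizeBytes = 0;

    PrerollBuffer(FileOutputStream out, int capacityBytes) {
        this.out = out;
        this.capacityBytes = capacityBytes;
    }

    void push(byte[] chunk) throws IOException {
        chunks.addLast(chunk);
        sizeBytes += chunk.length;
        while (sizeBytes > capacityBytes) {
            byte[] oldest = chunks.removeFirst();  // oldest frames...
            out.write(oldest);                     // ...go to the file
            sizeBytes -= oldest.length;
        }
    }
}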
