Add second audio track to MediaMuxer - android

I have an .mp4 video file with a video track only.
I'm using MediaExtractor and MediaMuxer to add an audio file to it, and this works fine.
Now I want to add another audio track to the processed file.
So I'm again using MediaExtractor and MediaMuxer to essentially copy the file (creating the video and audio tracks, reading [extractor] and writing [muxer]). In addition, I'm trying to add the second audio track to the muxer, but this throws the error "Failed to add the track to the muxer".
In this link we can see that the muxer does not support multiple audio tracks.
Code from the link:
// Throws exception b/c 2 audio tracks were added.
muxer = new MediaMuxer(outputFile, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
muxer.addTrack(MediaFormat.createAudioFormat("audio/mp4a-latm", 48000, 1));
try {
    muxer.addTrack(MediaFormat.createAudioFormat("audio/mp4a-latm", 48000, 1));
    fail("should throw IllegalStateException.");
} catch (IllegalStateException e) {
    // expected
}
Is there another way to do it? An elegant way?
By the way, I'm trying to avoid third-party libraries like ffmpeg, but I'll consider one if it turns out to be my only solution.
--EDIT--
Relevant piece of my code:
MediaMuxer muxer = new MediaMuxer(outputFile, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(videoAndAudioFile);
for (int currTrackIdx = 0; currTrackIdx < extractor.getTrackCount(); currTrackIdx++) {
    MediaFormat trackFormat = extractor.getTrackFormat(currTrackIdx);
    tracksIdx.add(muxer.addTrack(trackFormat));
}
MediaExtractor extractor2 = new MediaExtractor();
extractor2.setDataSource(secondAudioFile);
MediaFormat trackFormat = extractor2.getTrackFormat(0);
tracksIdx.add(muxer.addTrack(trackFormat)); // Crashes here

For anyone who reaches here: I found this official doc at link. Muxing multiple video/audio tracks seems not to be supported in old API versions and is still restricted in the latest version.
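For completeness, below is a minimal sketch of the case MediaMuxer does support: copying one video track and one audio track into a single output. This is an illustration, not the asker's code; the file names (videoOnlyFile, audioFile, outputFile) and the 1 MB sample buffer are placeholder assumptions, and adding a second audio track at the marked line is exactly what throws.

// Supported case: one video track + one audio track muxed into outputFile.
// videoOnlyFile, audioFile and outputFile are placeholder paths.
MediaMuxer muxer = new MediaMuxer(outputFile, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

MediaExtractor videoExtractor = new MediaExtractor();
videoExtractor.setDataSource(videoOnlyFile);
videoExtractor.selectTrack(0);
int videoTrackIdx = muxer.addTrack(videoExtractor.getTrackFormat(0));

MediaExtractor audioExtractor = new MediaExtractor();
audioExtractor.setDataSource(audioFile);
audioExtractor.selectTrack(0);
int audioTrackIdx = muxer.addTrack(audioExtractor.getTrackFormat(0)); // a second audio addTrack() here is what throws

muxer.start(); // must come after all addTrack() calls

ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024);
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
MediaExtractor[] extractors = { videoExtractor, audioExtractor };
int[] trackIndices = { videoTrackIdx, audioTrackIdx };

for (int i = 0; i < extractors.length; i++) {
    MediaExtractor extractor = extractors[i];
    while (true) {
        info.size = extractor.readSampleData(buffer, 0);
        if (info.size < 0) {
            break; // no more samples in this track
        }
        info.offset = 0;
        info.presentationTimeUs = extractor.getSampleTime();
        info.flags = extractor.getSampleFlags(); // SAMPLE_FLAG_SYNC maps onto BUFFER_FLAG_SYNC_FRAME
        muxer.writeSampleData(trackIndices[i], buffer, info);
        extractor.advance();
    }
    extractor.release();
}

muxer.stop();
muxer.release();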

MediaPlayer only plays certain WAV files

So the app I'm writing is hitting an API and getting base64-encoded WAV voicemails. Decoding and creating the WAV file works, but some files crash when calling setDataSource. I can pull the WAV files from my phone's storage and play them on a Mac, so I know the files are being saved okay.
The Base64 I'm decoding looks different between files, and I can tell whether MediaPlayer will play a file or not just by looking at it. I have read that WAV file bitrates behave differently with MediaPlayer, so I think that may be the issue? The Base64 that doesn't play is much shorter than the Base64 that does, even with the same audio duration.
try {
    val fileInputStream = FileInputStream(voicemailFile)
    if (audioManager == null) {
        audioManager = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager?
    }
    audioManager?.let { audioManager ->
        audioManager.isSpeakerphoneOn = isSpeakerPhoneOn
        audioManager.mode = AudioManager.MODE_IN_CALL
    }
    mediaPlayer.setAudioAttributes(AudioAttributes
            .Builder()
            .setUsage(AudioAttributes.USAGE_VOICE_COMMUNICATION)
            .build())
    mediaPlayer.setDataSource(fileInputStream.fd)
    mediaPlayer.setOnCompletionListener {
        finishPlayingVoicemail()
    }
    mediaPlayer.prepare()
    currentProgress?.let { currentProgress ->
        mediaPlayer.seekTo(currentProgress)
    }
    mediaPlayer.start()
    shouldResumePlaying = true
    fileInputStream.close()
} catch (e: Exception) {
    e.printStackTrace()
}
Crash -
E/GenericSource: initFromDataSource, cannot create extractor!
E/GenericSource: Failed to init from data source!
E/MediaPlayerNative: error (1, -2147483648)
W/System.err: java.io.IOException: Prepare failed.: status=0x1
W/System.err: at android.media.MediaPlayer.prepare(MediaPlayer.java:1274)
Examples of the Base64 that plays vs. doesn't play (just shortened snippets of them):
UklGRi96AABXQVZFZm10IBQAAAARAAEAQB8AANcPAAAAAQQAAgD5AWZhY3QEAAAAoPAAAGRhdGH7eQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADwACqAA1CwyADYA2wQEgDeupwoNzQiub6tCiE1JYHJvLsYRDQSqMysiTBEM9QAGQCozLsKMDYkgcnLqxhTQxKozLuJQTQjktrbmikyNSKozLuLQDQkgrq9
^^ Does not play
UklGRqTvAwBXQVZFZm10IBAAAAABAAEAgD4AAAB9AAACABAAZGF0YYDvAwD//wAAAAD/////AAD/////AAAAAP///f/+/wEAAgD///z/+v/6//3//v/+//3//v/9//3//v8BAAEA/v/6//j//P8CAAAA9//1//v//P/4//f//P/+//z/9//0//j//v8AAPv/+f/7//z/+v/5//v//P/8//v/+f/6/////P/1//b//f/9//f/9f/6//3/+//4//b/+v8CAAAA9//0//r//f/8//v//f/+//3/+v/5//v///8AAP7//f8AAP///P/7//3//v/8//v//P///wIAAAD7//
^^ Does play
First sample is:
RIFF (little-endian) data, WAVE audio, IMA ADPCM, mono 8000 Hz
second is:
RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, mono 16000 Hz
As per Supported media formats, under the details for PCM/WAVE:
8- and 16-bit linear PCM (rates up to limit of hardware). Sampling rates for raw PCM recordings at 8000, 16000 and 44100 Hz.
As for whether those files are linear PCM or not: from my reading of Pulse-code modulation, the second is linear PCM, whereas the first is adaptive differential PCM (ADPCM), so it may not be supported, hence what you are seeing.
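If you want to detect this up front, here is a minimal sketch (my illustration, not part of the original answer) that reads the audio-format code from the WAV "fmt " chunk; it assumes the canonical RIFF layout where that code sits at byte offset 20, which matches both samples above.

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public final class WavFormatProbe {
    // Returns the WAVE format code: 1 = linear PCM (MediaPlayer-friendly),
    // 0x11 (17) = IMA ADPCM (the files that fail to play).
    public static int readAudioFormatCode(String wavPath) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(wavPath))) {
            byte[] header = new byte[22]; // "RIFF" size "WAVE" "fmt " chunkSize formatCode
            in.readFully(header);
            // The format code is a little-endian 16-bit value at offset 20.
            return (header[20] & 0xFF) | ((header[21] & 0xFF) << 8);
        }
    }
}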
msbit's answer led me to look at ways to play IMA ADPCM wav files on Android. I decided to try swapping out MediaPlayer for ExoPlayer, and ExoPlayer seems to be playing the files MediaPlayer couldn't.
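Roughly, the MediaPlayer setup above becomes something like the following. This is a minimal sketch assuming ExoPlayer 2.12+ (the MediaItem API); voicemailFile is the same placeholder file, and the AudioManager/AudioAttributes handling from the original snippet is omitted.

// Sketch: play the decoded voicemail file with ExoPlayer instead of MediaPlayer.
SimpleExoPlayer player = new SimpleExoPlayer.Builder(context).build();
player.setMediaItem(MediaItem.fromUri(Uri.fromFile(voicemailFile)));
player.prepare();
player.play();
// Remember to call player.release() when playback is done.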
Let's try to play one .wav file. Let's say your .wav file is named test.wav and it is in the res/raw folder of your Android project. Please try the following lines of code and let me know.
MediaPlayer mPlayer;
private Context mContext;
mPlayer = MediaPlayer.create(mContext, R.raw.test);
mPlayer.start();

An AAC audio stream is playable in VLC for Android, but not in Exoplayer

I have an RTMP stream I want to play in my app using the Exoplayer library. My setup for that is as follows:
TrackSelector trackSelector = new DefaultTrackSelector();
RtmpDataSourceFactory rtmpDataSourceFactory = new RtmpDataSourceFactory(bandwidthMeter);
ExtractorsFactory extractorsFactory = new DefaultExtractorsFactory();
factory = new ExtractorMediaSource.Factory(rtmpDataSourceFactory);
factory.setExtractorsFactory(extractorsFactory);
createSource();
mPlayer = ExoPlayerFactory.newSimpleInstance(mActivity, trackSelector, new DefaultLoadControl(
        new DefaultAllocator(true, C.DEFAULT_BUFFER_SEGMENT_SIZE),
        1000, // min buffer
        3000, // max buffer
        1000, // playback
        2000, // playback after rebuffer
        DefaultLoadControl.DEFAULT_TARGET_BUFFER_BYTES,
        true
));
vwExoPlayer.setPlayer(mPlayer);
mPlayer.addListener(mVideoStreamHandler);
mPlayer.addVideoListener(new VideoListener() {
    @Override
    public void onVideoSizeChanged(int width, int height, int unappliedRotationDegrees, float pixelWidthHeightRatio) {
        Log.d("hasil", "onVideoSizeChanged: w:" + width + ", h:" + height);
        String res = width + "x" + height;
        resolution.setText(res);
    }

    @Override
    public void onRenderedFirstFrame() {
    }
});
Where createSource() is as follows:
private void createSource() {
    mMediaSource180 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_180));
    mMediaSource360 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_360));
    mMediaSource720 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_720));
    mMediaSourceAudio = factory.createMediaSource(Uri.parse(API.GAME_AUDIO_STREAM_URL));
}
My current problem is that only the first three ExtractorMediaSources work fine in ExoPlayer. mMediaSourceAudio refuses to play in ExoPlayer, but works just fine in VLC Media Player for Android.
Right now I suspect that the format is AAC-LTP, or whatever AAC variant requires a codec that is available in VLC but not in stock Android. However, I don't have access to the encoding process, so I don't know for sure.
If this isn't the case, what is it?
EDIT:
I've been debugging the BandwidthMeter and added a MediaSourceEventListener. When I use the normal Video sources, onDownstreamFormatChanged() gets called, but not when I use that Audio Stream source.
In addition, the BandwidthMeter works fine: bytes are downloaded throughout all parts of the stream, and more bytes arrive when the video stream comes in. But with the audio-only stream, when I call mPlayer.getBufferedPosition(), the returned value is always 0. Also, when I use the audio stream source, no OMX code is called - no decoders are set up.
Am I seeing a malformed audio stream, or do I need to change my ExoPlayer settings?
EDIT 2:
Further debugging reveals that the same FlvExtractor is used for all the video streams and the audio stream, even though the video streams have an avc video track and an mp4a-latm audio track. Is this normal?
Turns out it's because the stream was recognized as having two tracks/SampleQueues: one audio track, and one track with a null format. That null track was supposed to be the video track, which was expected to exist according to the stream's FLV header flags.
For now, I get around this by creating a custom MediaSource using a custom MediaPeriod. That custom MediaPeriod has code to separate the video and audio tracks of the SampleQueues, and then uses the audio-only SampleQueue[] instead of the source SampleQueue[] when I want to play the audio-only stream.
This leaves me with another point of concern, though: there is something one can do to alter the 'has audio track (flag & 0x04) and has video track (flag & 0x01)' flags in the RTMP stream, right?
Thanks for the comments. I'm new to ExoPlayer, but your comments helped me debug the issue and come up with multiple workarounds.
I first tried a custom MediaSource and custom MediaPeriod to address this audio issue. I observed the video format data coming after the audio data in the case of a video+audio Wowza stream, so maybeFinishPrepare() will wait for both the video and the audio format tag data before invoking onPrepared() in case the video tag data is received first. In case the audio data is received first, it won't wait and will call onPrepared() immediately.
With the above changes, I was able to play both audio-only and video+audio Wowza streams, where the RTMP tag headers arrive with the video tag data first, followed by the audio data.
I wasn't able to use the same patch with an SRS server to play both audio-only and video+audio streams, because the SRS server delivers the audio tag data first and then the video tag data.
So I debugged further in FlvExtractor. In readFlvHeader, I overrode the hasAudio and hasVideo variables: instead of setting them from flvHeaders.flags, I set them based on the first few tag headers (5 or 6). I used peekFully on the input 6 times in a loop. In each iteration, after fetching tagType and tagDataSize, tagDataSize is passed to input.advancePeekPosition(), and tagType is used to identify whether the tag data carries audio or video format data. After peeking at the first 6 consecutive tag headers, I had the actual values of hasAudio and hasVideo and could ignore flvHeaders.flags, which were previously used to set these variables. A sketch of this peek loop is shown below.
The custom FlvExtractor workaround looked cleaner than the custom MediaSource/MediaPeriod one, because only as many tracks as necessary get created once the proper hasVideo/hasAudio values are set.
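For reference, here is a minimal sketch of that peek loop against ExoPlayer's ExtractorInput API. This is my illustration of the approach described above, not the actual patch; the 11-byte FLV tag-header layout and the choice of 6 probes follow the answer, while the field offsets and names here are assumptions.

// Sketch: probe the first few FLV tag headers to decide hasAudio/hasVideo,
// instead of trusting the flags byte in the FLV header.
private static final int TAG_TYPE_AUDIO = 8;
private static final int TAG_TYPE_VIDEO = 9;

private boolean hasAudio;
private boolean hasVideo;

private void probeTagHeaders(ExtractorInput input) throws IOException, InterruptedException {
    byte[] tagHeader = new byte[11]; // type(1) + dataSize(3) + timestamp(3+1) + streamId(3)
    for (int i = 0; i < 6 && !(hasAudio && hasVideo); i++) {
        input.peekFully(tagHeader, 0, tagHeader.length);
        int tagType = tagHeader[0] & 0x1F;
        int tagDataSize = ((tagHeader[1] & 0xFF) << 16)
                | ((tagHeader[2] & 0xFF) << 8)
                | (tagHeader[3] & 0xFF);
        if (tagType == TAG_TYPE_AUDIO) {
            hasAudio = true;
        } else if (tagType == TAG_TYPE_VIDEO) {
            hasVideo = true;
        }
        // Skip over the tag body plus the trailing 4-byte "previous tag size" field.
        input.advancePeekPosition(tagDataSize + 4);
    }
    input.resetPeekPosition();
}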

Get audio input stream from a local file on Android

I am trying to get an audio input stream from a file in the local file system on an Android device.
This is so that I can use this library to show a waveform for the audio file:
https://github.com/newventuresoftware/WaveformControl/blob/master/app/src/main/java/com/newventuresoftware/waveformdemo/MainActivity.java#L125
The example in the project uses a raw resource, like so:
InputStream is = getResources().openRawResource(R.raw.jinglebells);
This input stream is later converted into a byte array and passed on to code that uses it to paint the waveform and play the sound.
However, when I did
InputStream is = new FileInputStream(new File(filePath));
it does not seem to work properly. The image generated is wrong, and the sound played is nothing like what the file actually is.
This is the body of the function in that library that gets the input stream and converts it into a sample array.
private short[] getAudioSample() throws IOException {
    // If I replace this part with new FileInputStream(new File(filePath)),
    // the "samples" generated from it do not work properly with the library.
    InputStream is = getResources().openRawResource(R.raw.jinglebells);
    byte[] data;
    try {
        data = IOUtils.toByteArray(is);
    } finally {
        if (is != null) {
            is.close();
        }
    }

    ShortBuffer sb = ByteBuffer.wrap(data).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
    short[] samples = new short[sb.limit()];
    sb.get(samples);
    return samples;
}
The sound file that I would like to have processed and passed to that library is created by a MediaRecorder with the following configuration:
MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.DEFAULT);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
Basically, that library requires raw PCM samples. Feeding it the 3gp FileInputStream generated by MediaRecorder directly is not going to cut it.
What I did was use MediaExtractor and MediaCodec to decode the 3gp audio data into PCM, sample by sample, and then feed that into the library. Then everything worked =)
The decoding logic can be taken almost directly from this awesome GitHub repo.
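If it helps anyone, here is a minimal sketch of that decode step using the synchronous MediaCodec API (assumes API 21+ for getInputBuffer/getOutputBuffer; filePath is a placeholder, and error handling and threading are omitted):

// Sketch: decode a recorded 3gp/AMR file into raw 16-bit PCM using MediaExtractor + MediaCodec.
private byte[] decodeToPcm(String filePath) throws IOException {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(filePath);
    MediaFormat format = extractor.getTrackFormat(0); // assume track 0 is the audio track
    extractor.selectTrack(0);

    MediaCodec codec = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
    codec.configure(format, null, null, 0);
    codec.start();

    ByteArrayOutputStream pcm = new ByteArrayOutputStream();
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    boolean inputDone = false;
    boolean outputDone = false;

    while (!outputDone) {
        if (!inputDone) {
            int inIndex = codec.dequeueInputBuffer(10000);
            if (inIndex >= 0) {
                ByteBuffer inBuf = codec.getInputBuffer(inIndex);
                int size = extractor.readSampleData(inBuf, 0);
                if (size < 0) {
                    codec.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    inputDone = true;
                } else {
                    codec.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
                    extractor.advance();
                }
            }
        }

        int outIndex = codec.dequeueOutputBuffer(info, 10000);
        if (outIndex >= 0) {
            ByteBuffer outBuf = codec.getOutputBuffer(outIndex);
            byte[] chunk = new byte[info.size];
            outBuf.position(info.offset);
            outBuf.get(chunk);
            pcm.write(chunk);
            codec.releaseOutputBuffer(outIndex, false);
            if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                outputDone = true;
            }
        }
    }

    codec.stop();
    codec.release();
    extractor.release();
    return pcm.toByteArray(); // little-endian 16-bit PCM, ready for ByteBuffer.wrap(...).asShortBuffer()
}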

MediaExtractor returns wrong number of tracks?

I'm trying to build a player that plays an MPEG2-TS stream with 2 audio tracks using MediaCodec and MediaExtractor. When I set the URL on the extractor with extractor.setDataSource(URL), I can see in Logcat that the framework has found the 2 audio tracks:
But afterwards I call:
int trackCount = extractor.getTrackCount();
for (int i = 0; i < trackCount; i++) {
    format = extractor.getTrackFormat(i);
    String mime = format.getString(MediaFormat.KEY_MIME);
    if (mime.startsWith("video/")) ...
    if (mime.startsWith("audio/")) ...
}
trackCount always equals 2 (1 audio track & 1 video track). What am I doing wrong?
You're not doing anything wrong - it just seems that the MPEG2TSExtractor class (the actual implementation behind MediaExtractor for mpeg2 ts files) only supports one audio stream and one video stream.
See e.g. the init method in https://android.googlesource.com/platform/frameworks/av/+/1a9c3954a/media/libstagefright/mpeg2ts/MPEG2TSExtractor.cpp (lines 156-193). So if you need to demux any mpeg2 ts streams with multiple audio streams, you basically need to bundle a demuxer of your own.

How can I pause voice recording in Android?

My aim is to pause while recording to a file.
I see on the Android developer site that MediaRecorder does not have a pause option.
Java supports merging two audio files programmatically, but on Android that approach doesn't work:
Join two WAV files from Java?
I also tried the default device audio recorder app, which is available on every device, but on a few Samsung devices it does not return the recording path.
Intent intent = new Intent(MediaStore.Audio.Media.RECORD_SOUND_ACTION);
startActivityForResult(intent, REQUESTCODE_RECORDING);
Can anyone help with voice recording with pause functionality?
http://developer.android.com/reference/android/media/MediaRecorder.html
MediaRecorder does not have pause and resume methods. You need to use stop and start methods instead.
I had such a requirement in one of my projects. What we did was create a raw file for saving the recorded data at the start of recording, using AudioRecord, and then on each resume we append the new data to the same file,
like:
FileOutputStream fos = new FileOutputStream(filename, true);
Here filename is the name of the raw file, and the new recording data is appended to it.
When the user stops the recording, we convert the entire raw file to .wav (or another format). Sorry that I can't post the entire code. Hope this gives you a direction to work in.
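A minimal sketch of that idea (my illustration, not the project's actual code: SAMPLE_RATE, RAW_FILE, the isRecording flag and the read loop are assumptions):

// Sketch: append raw AudioRecord data to one file across pause/resume cycles.
int bufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLE_RATE,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);

// 'true' opens the stream in append mode, so resumed audio lands after the old data.
FileOutputStream fos = new FileOutputStream(RAW_FILE, true);
byte[] buffer = new byte[bufferSize];

recorder.startRecording();
while (isRecording) {                      // cleared when the user taps pause/stop
    int read = recorder.read(buffer, 0, buffer.length);
    if (read > 0) {
        fos.write(buffer, 0, read);
    }
}
recorder.stop();
recorder.release();
fos.close();
// On final stop, wrap RAW_FILE's PCM data with a 44-byte WAV header to get a playable .wav.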
You can refer to my answer here if you still have this issue. For API level >= 24, pause/resume methods are available in the Android MediaRecorder class.
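For the API 24+ case that might look roughly like this (a sketch assuming an already configured and started MediaRecorder named recorder):

// Native pause/resume, available from Android N (API 24).
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
    recorder.pause();  // the output file stays open; no extra file is created
    // ... later, when the user taps resume:
    recorder.resume();
}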
For API level < 24
Add the dependency below in your Gradle file:
compile 'com.googlecode.mp4parser:isoparser:1.0.2'
The solution is to stop the recorder when the user pauses and start it again on resume, as already mentioned in many other answers on Stack Overflow. Store all the audio/video files generated in an array and use the method below to merge all the media files. The example is taken from the mp4parser library and modified a little to suit my needs.
public static boolean mergeMediaFiles(boolean isAudio, String sourceFiles[], String targetFile) {
    try {
        String mediaKey = isAudio ? "soun" : "vide";
        List<Movie> listMovies = new ArrayList<>();
        for (String filename : sourceFiles) {
            listMovies.add(MovieCreator.build(filename));
        }

        List<Track> listTracks = new LinkedList<>();
        for (Movie movie : listMovies) {
            for (Track track : movie.getTracks()) {
                if (track.getHandler().equals(mediaKey)) {
                    listTracks.add(track);
                }
            }
        }

        Movie outputMovie = new Movie();
        if (!listTracks.isEmpty()) {
            outputMovie.addTrack(new AppendTrack(listTracks.toArray(new Track[listTracks.size()])));
        }

        Container container = new DefaultMp4Builder().build(outputMovie);
        FileChannel fileChannel = new RandomAccessFile(String.format(targetFile), "rw").getChannel();
        container.writeContainer(fileChannel);
        fileChannel.close();
        return true;
    } catch (IOException e) {
        Log.e(LOG_TAG, "Error merging media files. exception: " + e.getMessage());
        return false;
    }
}
Use the isAudio flag as true for audio files and false for video files.
You can't do it using the Android API, but you can save a series of mp4 files and merge them using mp4parser, a powerful library written in Java. Also see my simple recorder with a "pause": https://github.com/lassana/continuous-audiorecorder.
