The idea
I am creating a save-to-device feature for a movie editing application that merges one video track with one (or optionally two) audio tracks.
First, there are multiple video clips that I merge into one single video track using MP4Parser.
Then, there are multiple audio clips that I would like to merge into one single audio track. These clips should not be appended, but placed into a single audio track at specific times. E.g. we have two audio clips (A1, A2) and a 60-second video track (V1). These audio clips can overlap or have white noise (silence) in between them. The length of the whole audio track has to match the video track, which can be up to 60 seconds. Up to 100 audio clips can be added to audio track 1:
V1 - 60.0 s
A1 - 0.3 s
A2 - 1.1 s
Last, there might be an optional second audio track containing a soundtrack, also fitted to the V1 video track.
Summary
This is how it would look:
Video track 1: [--------------------------------------------------------------------------------] 60 sec
Audio track 1: [-A1--A2--------------------------------------------------------------------] 60 sec
Audio track 2: [-------------------------------------------------------------------------------] 60 sec
The problem
I tried approaching the problem by appending x seconds of white noise (an empty WAV file) to the audio track to get a full-length track as described above, but that obviously would not work if the sounds overlap. What other ways could I try to tackle this problem?
private static final String OUTPUT = "output.mp4";
private static final String STORED_LOCATION = "/storage/emulated/0/";
/**
* Merges the videos located in /storage/emulated/0/ and saves the result to the same place under the given file name. Uses the mp4parser library. All this is done in an AsyncTask, not blocking the UI thread but showing a progress bar and a toast at the end.
*
*/
private void mergeVideosAsync()
{
new AsyncTask<Void, Void, String>()
{
@Override
protected String doInBackground(Void... arg0)
{
try
{
List<Movie> movieList = new ArrayList<>();
for (int i = 0; i < mVideoPathList.size(); ++i)
{
movieList.add(MovieCreator.build(new File(mVideoPathList.get(i)).getAbsolutePath()));
}
List<Track> videoTracks = new LinkedList<>();
List<Track> audioTracks = new LinkedList<>();
for (Movie m : movieList)
{
for (Track t : m.getTracks())
{
if (t.getHandler().equals("soun"))
{
//TODO: Add audio tracks here to the merging process
// audioTracks.add(t);
}
if (t.getHandler().equals("vide"))
{
videoTracks.add(t);
}
}
}
Movie result = new Movie();
if (audioTracks.size() > 0)
{
result.addTrack(new AppendTrack(audioTracks.toArray(new Track[audioTracks.size()])));
}
if (videoTracks.size() > 0)
{
result.addTrack(new AppendTrack(videoTracks.toArray(new Track[videoTracks.size()])));
}
BasicContainer out = (BasicContainer) new DefaultMp4Builder().build(result);
mOutputPath = STORED_LOCATION + OUTPUT;
WritableByteChannel fc = new RandomAccessFile(mOutputPath, "rw").getChannel();
out.writeContainer(fc);
fc.close();
}
catch (Exception e)
{
e.printStackTrace();
}
return mOutputPath;
}
}.execute();
}
If your audio tracks are overlapping then you have a problem, as you'll need to re-encode the audio.
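For the overlapping case the usual route is to decode the clips to PCM (e.g. with MediaCodec), mix the samples, and re-encode the result to AAC before muxing it back in. The mixing step itself is just a clamped sum; here is a minimal sketch of that step only, assuming both clips have already been decoded to 16-bit PCM with the same sample rate and channel layout (the method name is illustrative, not part of any library):
// Mixes clip "b" into buffer "a", starting at sample offset "offsetInA".
// Both arrays are 16-bit PCM with identical sample rate and channel layout.
static void mixInto(short[] a, short[] b, int offsetInA) {
    for (int i = 0; i < b.length && offsetInA + i < a.length; i++) {
        int sum = a[offsetInA + i] + b[i];
        // Clamp to the 16-bit range to avoid wrap-around distortion.
        if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE;
        if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
        a[offsetInA + i] = (short) sum;
    }
}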
If the audio tracks are non-overlapping then you might be able to use the SilenceTrackImpl:
Track nuAudio = new AppendTrack(
        audioTrackA1, new SilenceTrackImpl(100),
        audioTrackA2, new SilenceTrackImpl(500),
        audioTrackA3, new SilenceTrackImpl(1000),
        audioTrackA4, new SilenceTrackImpl(50)
);
and so on.
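Wired into the merge code above, the silence-padded track would replace the TODO in the "soun" branch. A rough sketch, where audioClips (the parsed "soun" tracks in timeline order) and gapsAfterClipMs (the precomputed silence lengths) are illustrative names, and where the SilenceTrackImpl constructor arguments may differ by mp4parser version — some releases also take the reference audio Track as a first argument so the generated silence matches the codec configuration:
// Pad each audio clip with the silence that separates it from the next one,
// then append everything into a single track spanning the whole video.
List<Track> paddedAudio = new ArrayList<>();
for (int i = 0; i < audioClips.size(); i++) {
    paddedAudio.add(audioClips.get(i));
    long gapMs = gapsAfterClipMs.get(i); // silence between clip i and clip i+1 (or to the end of V1)
    if (gapMs > 0) {
        paddedAudio.add(new SilenceTrackImpl(gapMs)); // see the note above about constructor arguments
    }
}
// "result" is the Movie assembled in mergeVideosAsync().
result.addTrack(new AppendTrack(paddedAudio.toArray(new Track[paddedAudio.size()])));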
Related
I can't find much documentation on the process of getting all of the media tracks (video, audio, and subtitles) using libVLC on Android.
From what I understand, I have to parse the media, and I'm doing it like this:
Media media = new Media(libVLC, Uri.parse(url));
media.setEventListener(new IMedia.EventListener() {
@Override
public void onEvent(IMedia.Event event) {
switch (event.type){
case IMedia.Event.ParsedChanged:
if(event.getParsedStatus() == IMedia.ParsedStatus.Done){
Log.i("App", "Parse done, track count " + media.getTrackCount());
Gson gson = new Gson();
for(int i=0; i<media.getTrackCount(); i++){
Log.i("App", "Track " + i + ": " + gson.toJson(media.getTrack(i)));
}
}
break;
}
}
});
media.parseAsync();
vlc.setMedia(media);
vlc.play();
The results I get from this are odd: sometimes I get one track only, the video track, but sometimes I also get the audio track, so two tracks total.
The problem is that the media also has a subtitle track, so there must be a way for me to get all three tracks (playing the exact same media with VLC on Windows does indeed show all three tracks).
What am I doing wrong?
Edit: I need a way to dynamically get all tracks; the media could have n tracks, so I don't know the exact number in advance. This is just a test where I happen to know there are three tracks.
Thanks
If you are not able to get the tracks from the Media object, use the VLC MediaPlayer object instead; it provides methods to get the audio tracks, video tracks, and subtitle tracks.
mMediaPlayer!!.setEventListener { event ->
    when (event.type) {
        MediaPlayer.Event.Opening -> {
            val audioTracks = mMediaPlayer!!.audioTracks
            val subtitleTracks = mMediaPlayer!!.spuTracks
            val videoTracks = mMediaPlayer!!.videoTracks
        }
    }
}
You can iterate over the lists to get individual tracks.
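For example, a small Java sketch of that iteration, assuming the libVLC Android bindings where getAudioTracks()/getVideoTracks()/getSpuTracks() return arrays of MediaPlayer.TrackDescription (each entry carrying a numeric id and a display name); the method name is illustrative:
// Logs every audio, video and subtitle track reported by the MediaPlayer.
// The arrays can be null before the media has been opened, so check first.
private void logTracks(MediaPlayer mediaPlayer) {
    MediaPlayer.TrackDescription[][] groups = {
            mediaPlayer.getAudioTracks(),
            mediaPlayer.getVideoTracks(),
            mediaPlayer.getSpuTracks() // subtitles
    };
    String[] labels = {"audio", "video", "subtitle"};
    for (int g = 0; g < groups.length; g++) {
        if (groups[g] == null) continue;
        for (MediaPlayer.TrackDescription track : groups[g]) {
            // track.id can be passed to setAudioTrack()/setVideoTrack()/setSpuTrack().
            Log.i("App", labels[g] + " track " + track.id + ": " + track.name);
        }
    }
}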
Currently, I have a server that streams four RTMP MediaSources: one with a 720p video source, one with a 360p video source, one with a 180p video source, and one audio-only source. If I want to switch resolutions, I have to stop the ExoPlayer instance, prepare the other track I want to switch to, then play.
The code I use to prepare the ExoPlayer instance:
TrackSelection.Factory adaptiveTrackSelectionFactory = new AdaptiveTrackSelection.Factory(bandwidthMeter);
TrackSelector trackSelector = new DefaultTrackSelector(adaptiveTrackSelectionFactory);
RtmpDataSourceFactory rtmpDataSourceFactory = new RtmpDataSourceFactory(bandwidthMeter);
ExtractorsFactory extractorsFactory = new DefaultExtractorsFactory();
factory = new AVControlExtractorMediaSource.Factory(rtmpDataSourceFactory);
factory.setExtractorsFactory(extractorsFactory);
createSource();
//noinspection deprecation
mPlayer = ExoPlayerFactory.newSimpleInstance(mActivity, trackSelector, new DefaultLoadControl(
new DefaultAllocator(true, C.DEFAULT_BUFFER_SEGMENT_SIZE),
1000, // min buffer
2000, // max buffer
1000, // playback
1000, //playback after rebuffer
DefaultLoadControl.DEFAULT_TARGET_BUFFER_BYTES,
true
));
vwExoPlayer.setPlayer(mPlayer);
mPlayer.addAnalyticsListener(mAnalyticsListener);
With createSource() being:
private void createSource() {
factory.setTrackPlaybackFlag(AVControlExtractorMediaSource.PLAYBACK_BOTH_AV);
mMediaSource180 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_180()));
mMediaSource180.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSource180"));
mMediaSource360 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_360()));
mMediaSource360.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSource360"));
mMediaSource720 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_720()));
mMediaSource720.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSource720"));
factory.setTrackPlaybackFlag(AVControlExtractorMediaSource.PLAYBACK_AUDIO_ONLY);
mMediaSourceAudio = factory.createMediaSource(Uri.parse(API.GAME_AUDIO_STREAM_URL()));
mMediaSourceAudio.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSourceAudio"));
}
private void releaseSource() {
mMediaSource180.releaseSource(null);
mMediaSource360.releaseSource(null);
mMediaSource720.releaseSource(null);
mMediaSourceAudio.releaseSource(null);
}
And the code I currently use to switch between these MediaSources is:
private void changeTrack(MediaSource source) {
if (currentMediaSource == source) return;
try {
this.currentMediaSource = source;
mPlayer.stop(true);
mPlayer.prepare(source, true, true);
mPlayer.setPlayWhenReady(true);
if (source == mMediaSourceAudio) {
if (!audioOnly) {
try {
TransitionManager.beginDelayedTransition(rootView);
} catch (Exception ignored) {
}
layAudioOnly.setVisibility(View.VISIBLE);
vwExoPlayer.setVisibility(View.INVISIBLE);
audioOnly = true;
try {
GameQnAFragment fragment = findFragment(GameQnAFragment.class);
if (fragment != null) {
fragment.signAudioOnly();
}
} catch (Exception e) {
Trace.e(e);
}
try {
GamePollingFragment fragment = findFragment(GamePollingFragment.class);
if (fragment != null) {
fragment.signAudioOnly();
}
} catch (Exception e) {
Trace.e(e);
}
}
} else {
if (audioOnly) {
TransitionManager.beginDelayedTransition(rootView);
layAudioOnly.setVisibility(View.GONE);
vwExoPlayer.setVisibility(View.VISIBLE);
audioOnly = false;
}
}
} catch (Exception ignore) {
}
}
I wanted to implement seamless switching between these MediaSources so that I don't need to stop and re-prepare, but it appears that this feature is not supported by ExoPlayer.
In addition, logging each MediaSource structure with the following code:
MappingTrackSelector.MappedTrackInfo info = ((DefaultTrackSelector)trackSelector).getCurrentMappedTrackInfo();
if(info != null) {
for (int i = 0; i < info.getRendererCount(); i++) {
TrackGroupArray trackGroups = info.getTrackGroups(i);
if (trackGroups.length != 0) {
for(int j = 0; j < trackGroups.length; j++) {
TrackGroup tg = trackGroups.get(j);
for(int k = 0; k < tg.length; k++) {
Log.i("track_info_"+i+"-"+j+"-"+k, tg.getFormat(k)+"");
}
}
}
}
}
just nets me one video format and one audio format for each MediaSource.
My current workaround is to prepare another ExoPlayer instance in the background, replace the currently running instance with it once preparation is complete, and release the old instance. That reduces the lag between the MediaSources somewhat, but doesn't come close to achieving seamless resolution changes like YouTube.
Should I implement my own TrackSelector and jam-pack all four sources into that, should I implement another MediaSource that handles all four sources, or should I just tell the colleague who maintains the streams to switch to a single RTMP MediaSource with a sort of manifest that lists all the available resolutions, so AdaptiveTrackSelection can switch between them?
Adaptive Bit Rate Streaming is designed to allow easy switching between different bit rate streams, but it requires the streams to be segmented and the player to download the video segment by segment.
In this way the player can decide which bit rate to choose for the next segment depending on the current network conditions (and the device display size and type). Apart from the difference in bit rate and quality, the player is able to move from one bit rate to another seamlessly this way.
See here for some more info: https://stackoverflow.com/a/42365034/334402
All the above relies on a delivery protocol which supports this segmentation and different bit rate streams. The most common ones today are HLS and MPEG-DASH.
The easiest way to support what I think you are looking for would be for your colleague who is supplying the stream to supply it using HLS and/or DASH.
Note that at the moment both HLS and DASH are required, as Apple devices require HLS while other devices tend to default to DASH. Traditionally HLS used TS as the container for the video in the segments and DASH used fragmented MP4, but there is now a move for both to use CMAF, which is essentially fragmented MP4.
So in theory a single set of bit rate videos can be used for both HLS and DASH now. In practice this will depend on whether your content is encrypted, as Apple/HLS used one encryption mode and everyone else another in the past. This is changing too, but it will take time before all devices support the common encryption mode, so if your streams are encrypted this is an added complication at the moment.
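If the stream does get republished as HLS, the player-side change is small for the same ExoPlayer 2.x generation used above; a rough sketch, assuming the exoplayer-hls module is on the classpath and using a placeholder manifest URL and user agent, that lets the existing AdaptiveTrackSelection pick the rendition:
// One HLS master playlist listing every rendition; AdaptiveTrackSelection can
// then move between them without stopping and re-preparing the player.
DataSource.Factory httpDataSourceFactory = new DefaultHttpDataSourceFactory("my-app");
MediaSource hlsSource = new HlsMediaSource.Factory(httpDataSourceFactory)
        .createMediaSource(Uri.parse("https://example.com/game/master.m3u8"));
mPlayer.prepare(hlsSource);
mPlayer.setPlayWhenReady(true);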
Issue description
I'm trying to use ExoPlayer to play background music in a loop and, at the same time, speech sounds that are played on click events.
I can't add silence at the end of the files because I play them in a row to make a sentence, like:
file one : "select"
file two : "a category"
That's why I can't have a silence gap between two files; there are about 1,500 files in the app, so I do this to save space.
At first I used MediaPlayer, but I found out that on certain phones (OnePlus A0001) I can't have two MediaPlayers working at the same time, so I decided to go with ExoPlayer.
The two players are running in a service.
The background music tracks are Ogg files in the raw folder and work great.
I couldn't manage to play the MP3 speech files from the raw folder, so I put them in the assets folder and managed to play them from there.
The problem I am facing now is that the end of every MP3 file is cut off on some of my devices. The length of the cut depends on the device.
Did I do something wrong?
Thanks for your help
This is my code for the soundPlayer
DefaultRenderersFactory renderersFactorySound = new DefaultRenderersFactory(this,null, DefaultRenderersFactory.EXTENSION_RENDERER_MODE_OFF);
exoPlayerSound = ExoPlayerFactory.newSimpleInstance(renderersFactorySound, new DefaultTrackSelector(), new DefaultLoadControl());
exoPlayerSound.addListener(playerSoundEventListener);
exoPlayerSound.setVolume(1.0f);
exoPlayerSound.setRepeatMode(Player.REPEAT_MODE_OFF);
final AssetDataSource dataSource = new AssetDataSource(this);
DataSpec dataSpec = new DataSpec(Uri.parse("asset:///sounds/" + soundsToPlay.get(0) + ".mp3"));
try {
dataSource.open(dataSpec);
} catch (AssetDataSource.AssetDataSourceException e) {
e.printStackTrace();
}
DataSource.Factory factoryMusic = new DataSource.Factory() {
@Override
public DataSource createDataSource() {
return dataSource;
}
};
MediaSource audioSource = new ExtractorMediaSource(Uri.parse("asset:///sounds/" + soundsToPlay.get(0) + ".mp3"), factoryMusic, Mp3Extractor.FACTORY, null, null);
exoPlayerSound.prepare(audioSource);
exoPlayerSound.setPlayWhenReady(true);
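For reference, consecutive speech clips could also be queued as a single source with ConcatenatingMediaSource from the same ExoPlayer 2.x releases, so a whole sentence plays as one item; a rough sketch with placeholder asset names and user agent:
// Queue two speech clips back to back; ExoPlayer plays them as one item,
// avoiding a stop/prepare between the words of the sentence.
DataSource.Factory assetFactory = new DefaultDataSourceFactory(this, "my-app");
MediaSource first = new ExtractorMediaSource(
        Uri.parse("asset:///sounds/select.mp3"),
        assetFactory, Mp3Extractor.FACTORY, null, null);
MediaSource second = new ExtractorMediaSource(
        Uri.parse("asset:///sounds/a_category.mp3"),
        assetFactory, Mp3Extractor.FACTORY, null, null);
exoPlayerSound.prepare(new ConcatenatingMediaSource(first, second));
exoPlayerSound.setPlayWhenReady(true);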
This is my code for the musicPlayer
DefaultRenderersFactory renderersFactoryMusic = new DefaultRenderersFactory(this, null, DefaultRenderersFactory.EXTENSION_RENDERER_MODE_OFF);
playerMusic = ExoPlayerFactory.newSimpleInstance(renderersFactoryMusic, new DefaultTrackSelector(), new DefaultLoadControl());
playerMusic.setVolume(0.4f);
playerMusic.setRepeatMode(Player.REPEAT_MODE_ONE);
DataSpec dataSpecMusic = new DataSpec(RawResourceDataSource.buildRawResourceUri(songId));
final RawResourceDataSource rawResourceDataSourceMusic = new RawResourceDataSource(this);
try {
rawResourceDataSourceMusic.open(dataSpecMusic);
} catch (RawResourceDataSource.RawResourceDataSourceException e) {
e.printStackTrace();
}
DataSource.Factory factoryMusic = new DataSource.Factory() {
@Override
public DataSource createDataSource() {
return rawResourceDataSourceMusic;
}
};
MediaSource audioSourceMusic = new ExtractorMediaSource(rawResourceDataSourceMusic.getUri(), factoryMusic, OggExtractor.FACTORY, null, null);
playerMusic.prepare(audioSourceMusic);
playerMusic.setPlayWhenReady(true);
Version of ExoPlayer being used
compile 'com.google.android.exoplayer:exoplayer-core:r2.5.4'
Device(s) and version(s) of Android being used
Devices with problem:
Samsung Galaxy Note 3 SM-N9005 (Android v5.0)
Acer Iconia B1-710 (Android v4.1.2)
Devices without problem:
OnePlus A0001 (Android v6.0.1)
Samsung Galaxy Note 1 GT-N7000 (Android v4.1.2)
I have an audio recording split across multiple files. I am creating one continuous audio file using the com.googlecode.mp4parser:isoparser:1.0.2 library.
Below is my code:
String mediaKey = isAudio ? "soun" : "vide";
List<Movie> listMovies = new ArrayList<>();
for (String filename : sourceFiles) {
listMovies.add(MovieCreator.build(filename));
}
List<Track> listTracks = new LinkedList<>();
for (Movie movie : listMovies) {
for (Track track : movie.getTracks()) {
if (track.getHandler().equals(mediaKey)) {
listTracks.add(track);
}
}
}
Movie outputMovie = new Movie();
if (!listTracks.isEmpty()) {
outputMovie.addTrack(new AppendTrack(listTracks.toArray(new Track[listTracks.size()])));
}
Container container = new DefaultMp4Builder().build(outputMovie);
FileChannel fileChannel = new RandomAccessFile(String.format(targetFile), "rw").getChannel();
container.writeContainer(fileChannel);
fileChannel.close();
The above code runs on an Android phone. Since it's a mobile environment, there are memory limitations per application.
The problem with the above code is that loading the Movies from files and creating the track list works fine for small files, but as the file size grows the operation starts to become non-responsive and takes a lot of memory. How can I make it memory efficient? Is there any way of doing these operations in small streams, as we do for file copy operations in Java?
Update:
For recording the audio files I am using the Android MediaRecorder, with the output format set to MPEG_4 and the audio encoder set to AAC:
mRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
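For completeness, the remaining recorder setup follows the standard MediaRecorder sequence (the output path here is a placeholder):
mRecorder.setOutputFile(outputFilePath); // e.g. a file under getExternalFilesDir(); placeholder variable
mRecorder.prepare();                     // throws IOException if the recorder can't be configured
mRecorder.start();
// ...later, when this segment is finished:
mRecorder.stop();
mRecorder.release();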
I need to rotate a video to fit some of my needs. I'll explain the details below.
I'm creating a Vine-like app. I have to record video segments and then merge all the parts into just one file. I'm doing this without issue in an Android app using the mp4parser library, latest version 1.0-RC-26, following the example provided on their website: here
The append-video example works fine if all the videos have the same orientation, but I discovered some issues recording video from the front camera, so the quick solution was to set the recording orientation to 270. The bad part of this solution is that the segments with this orientation appear with the wrong orientation in the merged video.
My possible solution is to rotate the video as needed in different situations, but I don't have a working example for my code. Searching the internet I found solutions like this one here. The problem with that code is that it is not compatible with the latest version (it gives a compilation error). I also tried to understand the logic of the library, but without results. For example, I experimented with the setMatrix instruction on the Movie object, but it simply doesn't work.
public static void mergeVideo(int SegmentNumber) throws Exception {
Log.d("PM", "Merge process started");
Movie[] inMovies = new Movie[SegmentNumber] ;
//long[] Matrix = new long[SegmentNumber];
for (int i = 1 ; i <= SegmentNumber; i++){
File file = new File(getCompleteFilePath(i));
if (file.exists()){
FileInputStream fis = new FileInputStream(getCompleteFilePath(i));
// Set rotation: I tried to experiment with this instruction but it is not working
inMovies [i-1].setMatrix(Matrix.ROTATE_90);
inMovies [i-1] = MovieCreator.build(fis.getChannel());
Log.d("PM", "Video " + i + " merged" );
}
//fis.close();
}
List<Track> videoTracks = new LinkedList<Track>();
List<Track> audioTracks = new LinkedList<Track>();
for (Movie m : inMovies) {
for (Track t : m.getTracks()) {
if (t.getHandler().equals("soun")) {
audioTracks.add(t);
}
if (t.getHandler().equals("vide")) {
videoTracks.add(t);
}
}
}
Movie result = new Movie();
if (audioTracks.size() > 0) {
result.addTrack(new AppendTrack(audioTracks.toArray(new Track[audioTracks.size()])));
}
if (videoTracks.size() > 0) {
result.addTrack(new AppendTrack(videoTracks.toArray(new Track[videoTracks.size()])));
}
Container out = new DefaultMp4Builder().build(result);
//out.getMovieBox().getMovieHeaderBox().setMatrix(Matrix.ROTATE_180); //set orientation, default merged video have wrong orientation
// Create a media file name
//
String filename = getCompleteMergedVideoFilePath() ;
FileChannel fc = new RandomAccessFile(String.format(filename), "rw").getChannel();
out.writeContainer(fc);
fc.close();
//don't leave until the file is on his place
File file = new File (filename);
do {
if (! file.exists()){
Log.d("PM", "Result file not ready");
}
} while (! file.exists() );
//
Log.d("PM", "Merge process finished");
}
Has anyone rotated video with the very latest version of mp4parser? English is not my native language, so I apologize for any grammar errors.
for (int i = 1; i <= SegmentNumber; i++) {
IsoFile isoFile = new IsoFile(getCompleteFilePath(i));
Movie m = new Movie();
List<TrackBox> trackBoxes = isoFile.getMovieBox().getBoxes(
TrackBox.class);
for (TrackBox trackBox : trackBoxes) {
trackBox.getTrackHeaderBox().setMatrix(Matrix.ROTATE_90);
m.addTrack(new Mp4TrackImpl(trackBox));
}
inMovies[i - 1] = m;
}
This is what I did to rotate a video.
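The rotated Movies can then be merged exactly as in the question's mergeVideo(): collect the tracks, wrap them in AppendTrack, and write the result with DefaultMp4Builder. A condensed sketch (the output path is a placeholder):
// Gather the rotated tracks and append them into one output movie.
List<Track> videoTracks = new LinkedList<>();
List<Track> audioTracks = new LinkedList<>();
for (Movie m : inMovies) {
    for (Track t : m.getTracks()) {
        if (t.getHandler().equals("soun")) audioTracks.add(t);
        if (t.getHandler().equals("vide")) videoTracks.add(t);
    }
}
Movie result = new Movie();
if (!audioTracks.isEmpty()) {
    result.addTrack(new AppendTrack(audioTracks.toArray(new Track[audioTracks.size()])));
}
if (!videoTracks.isEmpty()) {
    result.addTrack(new AppendTrack(videoTracks.toArray(new Track[videoTracks.size()])));
}
Container out = new DefaultMp4Builder().build(result);
FileChannel fc = new RandomAccessFile("/storage/emulated/0/merged_rotated.mp4", "rw").getChannel();
out.writeContainer(fc);
fc.close();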