ExoPlayer throws Decoder initialisation exception for large mp4 files in Android

I am using ExoPlayer to play videos continuously as a playlist in Android. When I play low-quality mp4 videos it works fine, but when I try to play higher-quality mp4 videos, after one or two videos in the playlist the screen does not display anything and the log shows the following exception:
com.google.android.exoplayer.MediaCodecTrackRenderer$DecoderInitializationException: Decoder init failed: OMX.amlogic.avc.decoder.awesome, MediaFormat(video/avc, 198826, 1920, 1080, -1.0, -1, -1, -1, -1, -1)
Even if I loop the same high-quality video, it plays the first time and the exception is thrown the second time. The exception appears whenever the video size is more than 80 MB. Is it some buffer size issue? Can someone please guide me? Thank you very much.
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.adplayertexture);
    AdplayerTexture = (TextureView) findViewById(R.id.AdPlayerTexture);
    AdplayerTexture.setBackgroundColor(Color.BLACK);
    AdplayerTexture.setSurfaceTextureListener(this);
}
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width,
        int height) {
    AdPlayerSurface = new Surface(surface);
    playMedia(AdPlayerSurface);
}
private void playMedia(Surface surface) {
    mediaplayer = new ExoPlayer();
    mediaplayer.play(this, Videopathlist[CurrentVideoIndex], surface);
    mediaplayer.addListener(this);
}
@Override
public void onStateChanged(boolean playWhenReady, int playbackState) {
    if (playbackState == ExoPlayer.STATE_ENDED) {
        // releasing the resources
        mediaplayer.DestroyPlayer();
        AdPlayerSurface.release();
        AdPlayerSurface = new Surface(AdplayerTexture.getSurfaceTexture());
        CurrentVideoIndex++;
        playMedia(AdPlayerSurface);
    }
}
This is the play() function in the root2mediaplayer class:
public void playMedia(Activity playerActivity, String mediapath, final long Position, Surface mediasurface) {
    String Systemroot = Environment.getExternalStoragePublicDirectory(
            Environment.DIRECTORY_DOWNLOADS).getAbsolutePath();
    try {
        File myFile = new File(Systemroot + java.io.File.separator + "Videos"
                + java.io.File.separator
                + mediapath);
        Uri uri = Uri.fromFile(myFile);
        final int numRenderers = 2;
        SampleSource sampleSource =
                new FrameworkSampleSource(playerActivity, uri, /* headers */ null, numRenderers);
        // Build the track renderers
        TrackRenderer videoRenderer = new MediaCodecVideoTrackRenderer(sampleSource, MediaCodec.VIDEO_SCALING_MODE_SCALE_TO_FIT);
        TrackRenderer audioRenderer = new MediaCodecAudioTrackRenderer(sampleSource);
        // Build the ExoPlayer and start playback
        MoviePlayer = ExoPlayer.Factory.newInstance(numRenderers);
        MoviePlayer.prepare(videoRenderer, audioRenderer);
        MoviePlayer.addListener(this);
        // Pass the surface to the video renderer.
        MoviePlayer.sendMessage(videoRenderer, MediaCodecVideoTrackRenderer.MSG_SET_SURFACE, mediasurface);
        MoviePlayer.seekTo(Position);
        MoviePlayer.setPlayWhenReady(true);
    } catch (Exception e) {
        e.printStackTrace();
        FileLog("exception in mediaplayer");
    }
}

Without looking at your full code, and without knowing on which device/platform you see this issue (though from your question it looks like a hardware platform from AMLOGIC), I can only guess that maybe you are not releasing the MediaCodec resources in your player when playback ends and/or when you switch to the next video.
MediaCodec is released in releaseCodec() API in https://github.com/google/ExoPlayer/blob/master/library/src/main/java/com/google/android/exoplayer/MediaCodecTrackRenderer.java.
You may want to check if that is indeed called when you stop playback of first video and start playback of next video in your playlist.
Typically, all high-end mobile platforms have hardware decoders that use limited, dedicated video memory (accessible only by the hardware decoders) on the system to decode frames. On some platforms, you will not be able to create a decoder if some other app (or the same app) has created another instance of the same hardware decoder and not released it when it goes to the background (in Activity lifecycle terms, onStop etc.).
Additionally, if the dedicated video memory is not released when the decoders are destroyed, you will exhaust the limited video memory available on the platform within a couple of video playback sessions due to the leak.
Look out for full platform adb logs when you create and destroy the MediaCodec instances (or in your case, stop and start playback of next video in playlist). That may give you some clues.
Hope my high-level advice is of some use to you in hunting down the problem. Good luck!

Instead of totally tearing down ExoPlayer and the Surface you should be able to reuse the same ExoPlayer and Surface instances.
Just stop ExoPlayer, create the new FrameworkSampleSource and MediaCodec{Audio,Video}TrackRenderer objects, and then call prepare again.
The new code would do something like:
MoviePlayer.stop()
MoviePlayer.seekTo(0)
new FrameworkSampleSource and MediaCodec{Audio,Video}TrackRenderers
MoviePlayer.prepare(newVideoRenderer, newAudioRenderer)
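Based on the code already in the question, a rough sketch of that reuse flow could look like the following (MoviePlayer, playerActivity and mediasurface are the question's own names; nextFile is a hypothetical File pointing at the next playlist entry, so treat this as illustrative rather than a drop-in fix):
// Reuse the existing ExoPlayer and Surface instead of tearing them down.
MoviePlayer.stop();
MoviePlayer.seekTo(0);
// Build a fresh source and renderers for the next video in the playlist.
Uri nextUri = Uri.fromFile(nextFile);
SampleSource nextSource =
        new FrameworkSampleSource(playerActivity, nextUri, /* headers */ null, 2);
TrackRenderer newVideoRenderer =
        new MediaCodecVideoTrackRenderer(nextSource, MediaCodec.VIDEO_SCALING_MODE_SCALE_TO_FIT);
TrackRenderer newAudioRenderer = new MediaCodecAudioTrackRenderer(nextSource);
// Prepare with the new renderers and hand the existing Surface to the video renderer.
MoviePlayer.prepare(newVideoRenderer, newAudioRenderer);
MoviePlayer.sendMessage(newVideoRenderer, MediaCodecVideoTrackRenderer.MSG_SET_SURFACE, mediasurface);
MoviePlayer.setPlayWhenReady(true);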
Refer to the comments for the stop method:
/**
 * Stops playback. Use {@code setPlayWhenReady(false)} rather than this method if the intention
 * is to pause playback.
 *
 * Calling this method will cause the playback state to transition to
 * {@link ExoPlayer#STATE_IDLE}. The player instance can still be used, and
 * {@link ExoPlayer#release()} must still be called on the player if it's no longer required.
 *
 * Calling this method does not reset the playback position. If this player instance will be used
 * to play another video from its start, then {@code seekTo(0)} should be called after stopping
 * the player and before preparing it for the next video.
 */

audio latency issues

In the application which I want to create, I face some technical obstacles. I have two music tracks in the application. For example, a user imports the music background as a first track. The second path is a voice recorded by the user to the rhythm of the first track played by the speaker device (or headphones). At this moment we face latency. After recording and playing back in the app, the user hears the loss of synchronisation between tracks, which occurs because of the microphone and speaker latencies.
Firstly, I try to detect the delay by filtering the input sound. I use Android's AudioRecord class and its read() method, which fills my short array with audio data.
I found that the initial values of this array are zeros, so I decided to cut them out before writing the data into the output stream.
So I treat those zeros as the "warm-up" latency of the microphone. Is this approach correct? This operation gives some results, but it doesn't resolve the problem, and at this stage I'm far away from that.
But the worse case is with the delay between starting the speakers and playing the music. This delay I cannot filter or detect. I tried to create some calibration feature which counts the delay. I play a „beep” sound through the speakers, and when I start to play it, I also begin to measure time. Then, I start recording and listen for this sound being detected by the microphone. When I recognise this sound in the app, I stop measuring time. I repeat this process several times, and the final value is the average from those results. That is how I try to measure the latency of the device. Now, when I have this value, I can simply shift the second track backwards to achieve synchronisation of both records (I will lose some initial milliseconds of the recording, but I skip this case, for now, there are some possibilities to fix it).
I thought that this approach would resolve the problem, but it turned out this is not as simple as I thought. I found two issues here:
1. Delay while playing two tracks simultaneously
2. Randomness in the device's audio latency.
The first: I play two tracks using AudioTrack class and I run method play() like this:
val firstTrack = //creating a track
val secondTrack = //creating a track
firstTrack.play()
secondTrack.play()
This code causes delays at the stage of playing the tracks. Now I don't even have to think about latency while recording; I cannot play two tracks simultaneously without a delay. I tested this with an external audio file (not recorded in my app): I start the same audio file twice using the code above, and I can see a delay. I also tried it with the MediaPlayer class, with the same results. In that case I even tried starting the tracks from the OnPreparedListener callback:
val firstTrack = // AudioPlayer
val secondTrack = // AudioPlayer
secondTrack.setOnPreparedListener {
    firstTrack.start()
    secondTrack.start()
}
And it doesn’t help.
I know that there is one more class provided by Android called SoundPool. According to the documentation, it can be better at playing tracks simultaneously, but I can't use it because it supports only small audio files, and I can't accept that limitation.
How can I resolve this problem? How can I start playing two tracks precisely at the same time?
The second: Audio latency is not deterministic - sometimes it is smaller, and sometimes it’s huge, and it’s out of my hands. So measuring device latency can help but again - it cannot resolve the problem.
To sum up: is there any solution that can give me the exact latency per device (or per app session), or any other mechanism for detecting the actual delay, so I can provide the best possible synchronisation while playing back two tracks at the same time?
Thank you in advance!
Synchronising audio for karaoke apps is tough. The main issue you seem to be facing is variable latency in the output stream.
This is almost certainly caused by "warm up" latency: the time it takes from hitting "play" on your backing track to the first frame of audio data being rendered by the audio device (e.g. headphones). This can have large variance and is difficult to measure.
The first (and easiest) thing to try is to use MODE_STREAM when constructing your AudioTrack and prime it with bufferSizeInBytes of data prior to calling play (more here). This should result in lower, more consistent "warm up" latency.
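As a rough illustration of that suggestion (my own sketch, not the answer author's code; the sample rate, channel configuration and the use of silence as primer data are assumptions), priming a MODE_STREAM AudioTrack before play() could look like this:
int sampleRate = 44100; // assumed sample rate
int channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
int encoding = AudioFormat.ENCODING_PCM_16BIT;
int bufferSizeInBytes = AudioTrack.getMinBufferSize(sampleRate, channelConfig, encoding);

AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        channelConfig, encoding, bufferSizeInBytes, AudioTrack.MODE_STREAM);

// Prime the track with one full buffer before starting playback, so the hardware
// has data ready the moment play() is called. In practice you would write the first
// chunk of your backing track here; silence is used only as a placeholder.
short[] primer = new short[bufferSizeInBytes / 2]; // 2 bytes per 16-bit sample
track.write(primer, 0, primer.length);
track.play();
// From here on, keep feeding audio data with track.write(...).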
A better way is to use the Android NDK to have a continuously running audio stream which is just outputting silence until the moment you hit play, then start sending audio frames immediately. The only latency you have here is the continuous output latency.
If you decide to go down this route I recommend taking a look at the Oboe library (full disclosure: I am one of the authors).
To answer one of your specific questions...
Is there a way to calculate the latency of the audio output stream programmatically?
Yes. The easiest way to explain this is with a code sample (this is C++ for the AAudio API but the principle is the same using Java AudioTrack):
// Get the index and time that a known audio frame was presented for playing
int64_t existingFrameIndex;
int64_t existingFramePresentationTime;
AAudioStream_getTimestamp(stream, CLOCK_MONOTONIC, &existingFrameIndex, &existingFramePresentationTime);
// Get the write index for the next audio frame
int64_t writeIndex = AAudioStream_getFramesWritten(stream);
// Calculate the number of frames between our known frame and the write index
int64_t frameIndexDelta = writeIndex - existingFrameIndex;
// Calculate the time which the next frame will be presented
int64_t frameTimeDelta = (frameIndexDelta * NANOS_PER_SECOND) / sampleRate_;
int64_t nextFramePresentationTime = existingFramePresentationTime + frameTimeDelta;
// Assume that the next frame will be written into the stream at the current time
int64_t nextFrameWriteTime = get_time_nanoseconds(CLOCK_MONOTONIC);
// Calculate the latency
*latencyMillis = (double) (nextFramePresentationTime - nextFrameWriteTime) / NANOS_PER_MILLISECOND;
A caveat: This method relies on accurate timestamps being reported by the audio hardware. I know this works on Google Pixel devices but have heard reports that it isn't so accurate on other devices so YMMV.
Following donturner's answer, here's a Java version (which also falls back to other methods depending on the SDK version):
/** The audio latency has not been estimated yet */
private static long AUDIO_LATENCY_NOT_ESTIMATED = Long.MIN_VALUE + 1;
/** The audio latency default value if we cannot estimate it */
private static long DEFAULT_AUDIO_LATENCY = 100L * 1000L * 1000L; // 100ms

/**
 * Estimate the audio latency
 *
 * Not accurate at all, depends on SDK version, etc. But that's the best
 * we can do.
 */
private static long estimateAudioLatency(AudioTrack track, long audioFramesWritten) {
    long estimatedAudioLatency = AUDIO_LATENCY_NOT_ESTIMATED;
    // First method. SDK >= 19.
    if (Build.VERSION.SDK_INT >= 19 && track != null) {
        AudioTimestamp audioTimestamp = new AudioTimestamp();
        if (track.getTimestamp(audioTimestamp)) {
            // Calculate the number of frames between our known frame and the write index
            long frameIndexDelta = audioFramesWritten - audioTimestamp.framePosition;
            // Calculate the time at which the next frame will be presented
            long frameTimeDelta = _framesToNanoSeconds(frameIndexDelta);
            long nextFramePresentationTime = audioTimestamp.nanoTime + frameTimeDelta;
            // Assume that the next frame will be written at the current time
            long nextFrameWriteTime = System.nanoTime();
            // Calculate the latency
            estimatedAudioLatency = nextFramePresentationTime - nextFrameWriteTime;
        }
    }
    // Second method. SDK >= 18.
    if (estimatedAudioLatency == AUDIO_LATENCY_NOT_ESTIMATED && Build.VERSION.SDK_INT >= 18) {
        Method getLatencyMethod;
        try {
            getLatencyMethod = AudioTrack.class.getMethod("getLatency", (Class<?>[]) null);
            estimatedAudioLatency = (Integer) getLatencyMethod.invoke(track, (Object[]) null) * 1000000L;
        } catch (Exception ignored) {}
    }
    // If no method has successfully given us a value, let's try a third method
    if (estimatedAudioLatency == AUDIO_LATENCY_NOT_ESTIMATED) {
        // CRT.getInstance() is an app-specific way of obtaining a Context here
        AudioManager audioManager = (AudioManager) CRT.getInstance().getSystemService(Context.AUDIO_SERVICE);
        try {
            Method getOutputLatencyMethod = audioManager.getClass().getMethod("getOutputLatency", int.class);
            estimatedAudioLatency = (Integer) getOutputLatencyMethod.invoke(audioManager, AudioManager.STREAM_MUSIC) * 1000000L;
        } catch (Exception ignored) {}
    }
    // No method gave us a value. Let's use a default value. Better than nothing.
    if (estimatedAudioLatency == AUDIO_LATENCY_NOT_ESTIMATED) {
        estimatedAudioLatency = DEFAULT_AUDIO_LATENCY;
    }
    return estimatedAudioLatency;
}

private static long _framesToNanoSeconds(long frames) {
    return frames * 1000000000L / SAMPLE_RATE;
}
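For completeness, the audioFramesWritten parameter is something you need to track yourself; a minimal sketch, assuming 16-bit mono PCM so one short equals one frame (for stereo you would divide by the channel count):
// Running count of frames written to the AudioTrack; pass it to estimateAudioLatency().
private long audioFramesWritten = 0;

private void writeAudio(AudioTrack track, short[] buffer, int lengthInShorts) {
    int written = track.write(buffer, 0, lengthInShorts);
    if (written > 0) {
        audioFramesWritten += written; // shorts == frames for mono 16-bit audio
    }
}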
The Android MediaPlayer class is notoriously slow to begin audio playback. I experienced an issue in an app I was creating where there was a greater than one second delay before an audio clip started playing. I resolved it by switching to ExoPlayer, which resulted in playback starting within 100 ms. I've also read that ffmpeg has an even faster audio startup time than ExoPlayer, but I haven't used it, so I can't make any promises.
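For reference, a minimal audio-only setup with the legacy ExoPlayer API already shown earlier on this page might look like the sketch below (the context variable and the file path are assumptions):
Uri audioUri = Uri.fromFile(new File("/sdcard/Download/clip.mp3")); // hypothetical path
SampleSource source = new FrameworkSampleSource(context, audioUri, /* headers */ null, 1);
TrackRenderer audioRenderer = new MediaCodecAudioTrackRenderer(source);

ExoPlayer player = ExoPlayer.Factory.newInstance(1);
player.prepare(audioRenderer);
player.setPlayWhenReady(true); // starts as soon as the renderer is ready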

Android Superpowered SDK Record and Playback simultaneously

My goal is to play local file while recording device's microphone input with low-latency.
I've come to the Superpowered library, because according to the documentation it provides low-latency features.
I've created the player using SuperpoweredAdvancedAudioPlayer and SuperpoweredAndroidAudioIO and it plays fine.
SuperpoweredAndroidAudioIO has a constructor with the parameters boolean enableInput, boolean enableOutput. Currently I'm using enableInput == false and enableOutput == true. When I set both parameters to true, there is no effect.
I wonder if it is possible to record to a file and play another file simultaneously?
Also, there is a SuperpoweredRecorder class in the library, but it says it is not for direct writing to disk, and you need to use the createWAV, fwrite and closeWAV methods.
I've tried implementing the Recorder separately, but the quality is not good (it plays back two to three times faster than the real recording, and the sound is distorted).
Here is the simplest piece of code for recording I used:
void SuperpoweredFileRecorder::start(const char *destinationPath) {
    file = createWAV(destinationPath, sampleRate, 2);
    audioIO = new SuperpoweredAndroidAudioIO(sampleRate, bufferSize, true, false, audioProcessing, NULL, bufferSize); // Start audio input/output.
}

void SuperpoweredFileRecorder::stop() {
    closeWAV(file);
    audioIO->stop();
}

static bool audioProcessing(void *clientdata, short int *audioInputOutput, int numberOfSamples, int samplerate) {
    fwrite(audioInputOutput, sizeof(short int), numberOfSamples, file);
    return false;
}
Probably I cannot use Superpowered for that purpose and need to just make recording with OpenSL ES directly.
Thanks in advance!
After experiments I found the solution.
SuperpoweredRecorder works fine for recording tracks;
I've created two separate SuperpoweredAndroidAudioIO instances - one for playback and another for the recorder. After some synchronization work it performs well (I minimized the latency to a very low level, so it suits my needs).
I post some code snippet with the idea I implemented:
https://bitbucket.org/snippets/kasurd/Mynnp/nativesuperpoweredrecorder-with
Hope it helps somebody!
You can do this with one instance of the SuperpoweredAndroidAudioIO with enableInput and enableOutput set to true.
The audio processing callback (audioProcessing() in your case) receives audio (microphone) in the audioInputOutput parameter. Just pass that to your SuperpoweredRecorder, and it will write it onto disk.
After that, do your SuperpoweredAdvancedAudioPlayer processing, and convert the result into audioInputOutput. That will go to the audio output.
So it's like, in pseudo-code:
audioProcessing(audioInputOutput) {
    recorder->process(audioInputOutput)
    player->process(some_buffer)
    float_to_short_int(some_buffer, audioInputOutput)
}
Never do any fwrite in the audio processing callback, as it must complete within a very short time, and disk operations may be too slow.
For me, this works when I double numberOfSamples:
fwrite(audioInputOutput, sizeof(short int), numberOfSamples * 2, file);
This leads to clear stereo output.

Recording Videos in Chunks Using Media Recorder Android

I am implementing an application that includes the functionality of saving recorded video into different video files based on a certain amount of time.
To achieve that, I have implemented a custom camera and used MediaRecorder.stop() and MediaRecorder.start() in a loop.
But this approach creates a lag while restarting the MediaRecorder (stop and start). Is it possible to seamlessly stop and start recording using MediaRecorder or any third-party library?
Any help is highly appreciated.
I believe the best way to implement chunked recording is to set a maximum duration on the MediaRecorder object:
mMediaRecorder.setMaxDuration(CHUNK_TIME);
Then you can attach an info listener; it will notify you when the maximum chunk time is reached:
mMediaRecorder.setOnInfoListener(new MediaRecorder.OnInfoListener() {
    @Override
    public void onInfo(MediaRecorder mr, int what, int extra) {
        if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_DURATION_REACHED) {
            // restartVideo()
        }
    }
});
In restartVideo() you should first clear the previous MediaRecorder object and then start recording again, as sketched below.
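A rough sketch of what restartVideo() could look like (the camera/preview wiring, the CamcorderProfile and the getNextChunkFile() helper are assumptions added here for illustration, not part of the original answer):
private void restartVideo() {
    // Tear down the recorder that just hit MEDIA_RECORDER_INFO_MAX_DURATION_REACHED.
    try {
        mMediaRecorder.stop();
    } catch (RuntimeException e) {
        // stop() can throw if no valid data was recorded; ignored in this sketch.
    }
    mMediaRecorder.reset();

    // Configure a fresh session writing to the next chunk file.
    // (Camera/preview setup and the OnInfoListener must be re-attached after reset().)
    mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
    mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
    mMediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH));
    mMediaRecorder.setOutputFile(getNextChunkFile().getAbsolutePath()); // hypothetical helper
    mMediaRecorder.setMaxDuration(CHUNK_TIME);
    try {
        mMediaRecorder.prepare();
        mMediaRecorder.start();
    } catch (IOException e) {
        e.printStackTrace();
    }
}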
You can create two instances of MediaRecorder which will overlap slightly (i.e. when the stream is close to the end of the first chunk you can prepare and start the second one). It is possible to record 2 video files using 2 MediaRecorders at the same time if they capture only the video. Unfortunately sharing the mic between 2 MediaRecorder instances is not supported.

What is the best way to achieve Audio Video Synchronization in Android Based Media Player Application using MediaCodec API?

I'm trying to implement a Media Player in android using the MediaCodec API.
I've created three threads
Thread 1 : To de-queue the input buffers to get free indices and then queuing the audio and video frames in respective codec's input buffer
Thread 2 : To de-queue the audio codec's output buffer and render it using AudioTrack class' write method
Thread 3 : To de-queue the video codec's output buffer and render it using releaseBuffer method
I'm facing a lot of problems in achieving synchronization between the audio and video frames. I never drop audio frames, and before rendering a video frame I check whether the decoded frame is late by more than 30 ms; if it is, I drop the frame, and if it is more than 10 ms early I don't render it yet.
To find the difference between audio and video I use the following logic:
public long calculateLateByUs(long timeUs) {
    long nowUs = 0;
    if (hasAudio && audioTrack != null) {
        synchronized (audioTrack) {
            if (first_audio_sample && startTimeUs >= 0) {
                System.out.println("First video after audio Time Us: " + timeUs);
                startTimeUs = -1;
                first_audio_sample = false;
            }
            nowUs = (audioTrack.getPlaybackHeadPosition() * 1000000L) /
                    audioCodec.format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
        }
    } else if (!hasAudio) {
        nowUs = System.currentTimeMillis() * 1000;
        startTimeUs = 0;
    } else {
        nowUs = System.currentTimeMillis() * 1000;
    }
    if (startTimeUs == -1) {
        startTimeUs = nowUs - timeUs;
    }
    if (syslog) {
        System.out.println("Timing Statistics:");
        System.out.println("Key Sample Rate :" + audioCodec.format.getInteger(MediaFormat.KEY_SAMPLE_RATE) + " nowUs: " + nowUs + " startTimeUs: " + startTimeUs + " timeUs: " + timeUs + " return value :" + (nowUs - (startTimeUs + timeUs)));
    }
    return (nowUs - (startTimeUs + timeUs));
}
timeUs is the presentation time in micro-seconds of the video frame. nowUs is supposed to contain the duration in micro-seconds for which audio has been playing. startTimeUs is the initial difference between audio and video frames which has to be maintained always.
The first if block checks whether there is indeed an audio track and whether it has been initialized, and sets the value of nowUs by calculating it from the AudioTrack.
If there is no audio (the first else branch), nowUs is set to the system time and the initial gap is set to zero. startTimeUs is initialized to zero in the main function.
The if block inside the synchronized block handles the case where the first frame to be rendered is audio and the audio frame joins later; the first_audio_sample flag is initially set to true.
Please let me know if anything is not clear.
Also, if you know of any open-source project where a media player for an A/V file has been implemented using MediaCodec, that would be great.
If you are working on one of the latest releases of Android, you can consider retrieving the AudioTimestamp from AudioTrack directly. Please refer to this documentation for more details. Similarly, you could also consider retrieving the sampling rate via getSampleRate.
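As an illustrative sketch of that idea (my own, not part of the original answer; it assumes the same audioTrack and audioCodec fields as in the question and API level 19+), AudioTrack.getTimestamp() could replace the getPlaybackHeadPosition() based calculation of nowUs like this:
AudioTimestamp ts = new AudioTimestamp();
int sampleRate = audioCodec.format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
if (audioTrack.getTimestamp(ts)) {
    // Extrapolate from the reported frame position and the time that report was taken.
    long elapsedSinceTimestampUs = (System.nanoTime() - ts.nanoTime) / 1000L;
    nowUs = (ts.framePosition * 1000000L) / sampleRate + elapsedSinceTimestampUs;
} else {
    // Fall back to the playback head position if no timestamp is available yet.
    nowUs = (audioTrack.getPlaybackHeadPosition() * 1000000L) / sampleRate;
}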
If you wish to continue with your algorithm, you could consider a relatively similar implementation in this native example. SimplePlayer implements a player engine by employing MediaCodec and has an a-v sync section too. Please refer to this section of code where the synchronization is performed. I feel this should help as a good reference.

Android Vitamio weired buffering on progressive download stream

I try to stream progressively (e.g. http://server.com/video.mp4).
When I use the standard Google media player (VideoView from the android package) and register an OnBufferingUpdateListener, I get a buffer percentage that refers to the download state of the whole video. This player also has a loading view where I can see the buffer state.
This buffer percentage and view show me how much of the video has been downloaded.
Now, when I use the Vitamio player, the OnBufferingUpdateListener reports 99 percent buffering after a few seconds, and there is no loading view either. And when I pause the playback it stops buffering immediately instead of continuing to buffer like the Google VideoView does, which is very useful if you have a slow HTTP stream.
Is there a way to make the Vitamio video player buffer the video files the same way the Google video player does?
Thank you,
Daniel
Sorry, I posted that question as the wrong user. Here is the answer to what I tried:
The VideoView from the android.widget package (the Android default, which plays only a few video formats) and the one from the io.vov.vitamio.widget package (Vitamio, which plays most video formats) have the same structure. In both you can register an OnBufferingUpdateListener that returns the buffer state in percent:
videoview.setOnBufferingUpdateListener(new io.vov.vitamio.MediaPlayer.OnBufferingUpdateListener() {
    public void onBufferingUpdate(io.vov.vitamio.MediaPlayer mp, int i) {
        Log.v(TAG, "Buffer percentage done: " + i);
    }
});
or with the android default VideoView:
videoview.setOnBufferingUpdateListener(new android.media.MediaPlayer.OnBufferingUpdateListener() {
    public void onBufferingUpdate(android.media.MediaPlayer mp, int i) {
        Log.v(TAG, "Buffer percentage done: " + i);
    }
});
If I use android.widget.VideoView, the buffer percentage slowly increases until it reaches 100%, i.e. the video file has been downloaded completely. And it keeps delivering buffering updates when I press the pause button.
When I use io.vov.vitamio.widget.VideoView, the percentage reaches 100% within seconds. Then the video starts and the OnBufferingUpdateListener never gets called again (when I call getBufferPercentage it is always at 99 percent, which seems to be the reason). And as I said: it seems to stop buffering when I press the pause button.
I think the buffering works differently in Vitamio, but that's a problem. Especially when I stream videos from the web and the video data rate is higher than the download speed, I need to pre-buffer the video by pressing pause and waiting until enough data has been downloaded to watch it smoothly. Hope you get what I mean. Thank you.
