How to keep two or more MediaCodec instances in Android

I have a vertical ViewPager2 whose ViewHolders use a SurfaceView to play MP4 files.
At any time there is only one ViewHolder instance and one surface playing video.
I have two interfaces:
IMediaCodecProxy
interface IMediaCodecProxy {
    // all method signatures mirror MediaCodec
    fun selectCodec()
    fun configureCodec()
    fun release()
    fun flush()
}
IMediaCodecPool
interface IMediaCodecPool {
    fun getCurrentCodec(): MediaCodecWrapper?
    fun selectorCodec(configure: MediaCodecAdapter.Configuration)
    fun resetAndStoreCodec(codec: MediaCodecWrapper)
}
IMediaCodecProxy is a proxy for MediaCodec; its method signatures match MediaCodec's.
IMediaCodecPool is a pool that keeps all configured/started MediaCodec instances.
I want to keep the MediaCodec in its configured state after it is released.
What I do is:
Step 1. When video playback completes, IMediaCodecProxy#release runs and works.
Step 2. If the current codec instance can be kept (I apply some limits here),
Step 3. IMediaCodecPool#resetAndStoreCodec runs and works.
Step 4. The surface is destroyed and the next ViewHolder is created. When the next ViewHolder's surface is created, IMediaCodecProxy#newCodec runs (I create a new codec instance because the two videos' MIME types differ), and then IMediaCodecProxy#configure fails.
I don't know what I need to do in IMediaCodecPool#resetAndStoreCodec.
I tried codec#flush, but it did not help.
So I don't know what went wrong; maybe it's the surface, maybe the MediaCodec, maybe the codec's state.
Thanks for the help!
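For what it's worth, a minimal sketch of one way such a pool could work; everything here is an assumption on my part, names included, but it illustrates a likely cause of the configure() failure. MediaCodec.flush() leaves the codec in the Executing state, where configure() throws IllegalStateException, while stop() returns it to the Uninitialized state, from which configure() with a new Surface is legal:
import android.media.MediaCodec
import android.media.MediaFormat
import android.view.Surface

// Hypothetical pool: keeps stopped-but-not-released codecs keyed by MIME type.
class MediaCodecPool {
    private val idle = mutableMapOf<String, MediaCodec>()

    // "Reset" here means stop(): the codec goes back to the Uninitialized
    // state, so a later configure() with a new Surface will not throw.
    fun resetAndStoreCodec(mime: String, codec: MediaCodec) {
        codec.stop() // flush() alone is NOT enough to allow reconfiguring
        idle[mime] = codec // a real pool would handle several codecs per MIME
    }

    // Reuse a pooled codec if the MIME type matches, otherwise create one.
    fun obtainCodec(format: MediaFormat, surface: Surface): MediaCodec {
        val mime = format.getString(MediaFormat.KEY_MIME)!!
        val codec = idle.remove(mime) ?: MediaCodec.createDecoderByType(mime)
        codec.configure(format, surface, null, 0)
        codec.start()
        return codec
    }

    fun releaseAll() {
        idle.values.forEach { it.release() }
        idle.clear()
    }
}
In other words, resetAndStoreCodec probably needs stop() rather than flush(); reconfiguring a stopped codec is still cheaper than allocating a brand-new one.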

Related

WebRTC ScreenCapturerAndroid

I'm trying to create a screen-capturer app for Android. I already have the WebRTC portion set up with a video capturer using the Camera2Enumerator library from here. How can I modify this to create a pre-recorded video capturer instead of a camera capturer?
Thanks!
Just wanted to give an update that I have solved this. I'm unable to share the entire code, but here's a process that might help:
Acquire one frame of your pre-recorded file and store it in a byte array (it must be in YUV format).
Replace the VideoCapturer() with the following:
fun onGetFrame(p0: ByteArray?) {
    // Frame timestamp in nanoseconds
    val timestampNS = java.util.concurrent.TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime())
    // 288x352 NV21 frame; no release callback
    val buffer: NV21Buffer = NV21Buffer(p0, 288, 352, null)
    // rotation = 0
    val videoFrame: VideoFrame = VideoFrame(buffer, 0, timestampNS)
    localVideoSource.capturerObserver.onFrameCaptured(videoFrame)
    videoFrame.release()
}
where p0 is the byte array with the frame.
Call this function in startLocalVideoCapture() using a timer (every few milliseconds; I used 10 nanoseconds): https://developer.android.com/reference/android/os/CountDownTimer
Remove this line in startLocalVideoCapture():
VideoCapturer.initialize(
    surfaceTextureHelper,
    localVideoOutput.context,
    localVideoSource.capturerObserver)
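The answer does not include the timer itself, so here is a minimal sketch of that step under my own assumptions (frameBytes holds the stored YUV frame; a Handler loop stands in for the CountDownTimer the answer links to):
import android.os.Handler
import android.os.Looper

// Hypothetical: pushes the pre-recorded frame every 33 ms (~30 fps).
private val frameHandler = Handler(Looper.getMainLooper())
private val frameTicker = object : Runnable {
    override fun run() {
        onGetFrame(frameBytes) // frameBytes: the stored YUV frame (assumed)
        frameHandler.postDelayed(this, 33L)
    }
}

fun startFrameTimer() = frameHandler.post(frameTicker)
fun stopFrameTimer() = frameHandler.removeCallbacks(frameTicker)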

Video mirroring in android

Recently, I was working with CameraX to record video with the front camera only, but I ran into an issue where the video is mirrored after saving.
Currently, I am using a library (Mp4Composer-android) to mirror the video after recording, which takes up processing time. I noticed that Snapchat and Instagram give the output without this processing.
After that, I also noticed that the native camera application provides an option to select whether we want to mirror the video or not.
The configuration I have added to CameraX:
videoCapture = VideoCapture
    .Builder()
    .apply {
        setBitRate(2000000)
        setVideoFrameRate(24)
    }
    .build()
How can I make my camera not mirror the video?
Temporary Solution:
I used this library as a temporary solution. The issue with this library was that I had to process the video after recording it, and that took considerable time. I used this code:
Add this to Gradle:
//Video Composer
implementation 'com.github.MasayukiSuda:Mp4Composer-android:v0.4.1'
Code for flipping:
Mp4Composer(videoPath, video)
    .flipHorizontal(true)
    .listener(object : Mp4Composer.Listener {
        override fun onProgress(progress: Double) {}
        override fun onCurrentWrittenVideoTime(timeUs: Long) {}
        override fun onCompleted() {}
        override fun onCanceled() {}
        override fun onFailed(exception: Exception?) {}
    })
    .start()
Note: This will compress your video too. Look into the library documentation for more details
An answer that was given to me by a senior developer who worked on a video-based NDK for a long time:
Think of the frames given out by the camera as going through a dedicated highway. There is a way we can capture all the frames going through that highway:
Capture the frames coming through that highway.
Flip the pixels of each frame.
Give the frames back out through that same highway.
He didn't specify how to capture and release the frames.
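To illustrate just the flip step (the capture/release mechanics were left unspecified), a rough sketch of my own that mirrors an NV21 frame in place; a real implementation would also need to keep up with 24-120 fps:
// Hypothetical: horizontally mirrors an NV21 frame in place.
fun flipNV21Horizontally(data: ByteArray, width: Int, height: Int) {
    // Y plane: one byte per pixel, reverse each row.
    for (row in 0 until height) {
        val base = row * width
        var left = base
        var right = base + width - 1
        while (left < right) {
            val tmp = data[left]; data[left] = data[right]; data[right] = tmp
            left++; right--
        }
    }
    // VU plane: interleaved 2-byte pairs at half resolution (height/2 rows).
    val uvStart = width * height
    for (row in 0 until height / 2) {
        val base = uvStart + row * width
        var left = base
        var right = base + width - 2
        while (left < right) {
            // Swap whole VU pairs so chroma stays aligned with luma.
            val v = data[left]; val u = data[left + 1]
            data[left] = data[right]; data[left + 1] = data[right + 1]
            data[right] = v; data[right + 1] = u
            left += 2; right -= 2
        }
    }
}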
Why I didn't use that solution (the issue):
If we have to perform this action in real time, we have to do it with high efficiency. Depending on the quality of the camera, we have to capture anywhere from 24 to 120 frames per second, then process and dispatch them.
To do that, we would need NDK developers and a lot of engineering, which most startups can't afford.

Beat matching crossfade with ExoPlayer

I want to implement a beat-matching crossfade feature using ExoPlayer. I have a concept of how it should work, but I find it hard to adapt it to ExoPlayer.
Let me first describe how I want to do this so you can understand the case.
As you probably know, a beat-matching crossfade lets you switch seamlessly from one song to another. Additionally, it adjusts the second song's tempo to the first song's tempo during the crossfade.
So my plan is as follows:
1. Load songs A and B so they both start to buffer.
2. Decoded samples of songs A and B are stored in buffers BF1 and BF2.
3. There would be a class called MUX, which is the main buffer and contains both songs' buffers, BF1 and BF2. MUX provides audio samples to the Player: either BF1 samples alone, or samples mixed from BF1 and BF2 if there is a crossfade.
4. When the buffer reaches the crossfade point, the samples are sent to an Analyser class so it can analyse the samples from both buffers and modify them for the crossfade. The Analyser sends the modified samples back to MUX, which updates its main buffer.
5. When the crossfade is finished, load the next song from the playlist.
My main question is how to mix two songs so I can implement a class like MUX (see the sketch below).
What I know so far is that I can access decoded samples in the MediaCodecRenderer.processOutputBuffer() method, so from that point I could create my BF1 and BF2 buffers.
There was also an idea to create two instances of ExoPlayer: while the first song is playing, the second one is analysed and its samples are modified for the upcoming crossfade. But I think it may be hard to synchronise two players so that the beats match.
Thanks in advance for any help!
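For the MUX combine step itself, a minimal sketch of mixing two decoded buffers, assuming both hold 16-bit PCM at the same sample rate and channel layout (all names here are hypothetical):
import kotlin.math.roundToInt

// Hypothetical: mixes two 16-bit PCM buffers with per-source gains,
// clamping to avoid overflow. The gains would come from the crossfade curve.
fun mixPcm16(a: ShortArray, b: ShortArray, gainA: Float, gainB: Float): ShortArray {
    val out = ShortArray(minOf(a.size, b.size))
    for (i in out.indices) {
        val mixed = (a[i] * gainA + b[i] * gainB).roundToInt()
        out[i] = mixed.coerceIn(Short.MIN_VALUE.toInt(), Short.MAX_VALUE.toInt()).toShort()
    }
    return out
}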
Answering @David's question about the crossfade implementation, it looks more or less like this. You have to listen to the active player's playback and call this method when you want to start the crossfade. It's in RxJava but can easily be migrated to Coroutines Flow.
fun crossfadeObservable(
    fadeOutPlayer: Player?,
    fadeInPlayer: Player,
    crossfadeDurationMs: Long,
    crossfadeScheduler: Scheduler,
    uiScheduler: Scheduler
): Observable<Unit> {
    val fadeInMaxGain = fadeInPlayer.audioTrack?.volume ?: 1f
    val fadeOutMaxGain = fadeOutPlayer?.audioTrack?.volume ?: 1f
    fadeOutPlayer?.enableAudioFocus(false)
    fadeInPlayer.enableAudioFocus(true)
    fadeInPlayer.playerVolume = 0f
    fadeInPlayer.play()
    fadeOutPlayer?.playerVolume = fadeOutMaxGain
    fadeOutPlayer?.play()
    val iterations: Float = crossfadeDurationMs / CROSSFADE_STEP_MS.toFloat()
    return Observable.interval(CROSSFADE_STEP_MS, TimeUnit.MILLISECONDS, crossfadeScheduler)
        .take(iterations.toInt())
        .map { iteration -> (iteration + 1) / iterations }
        .filter { percentOfCrossfade -> percentOfCrossfade <= 1f }
        .observeOn(uiScheduler)
        .map { percentOfCrossfade ->
            fadeInPlayer.playerVolume = percentOfCrossfade.coerceIn(0f, fadeInMaxGain)
            fadeOutPlayer?.playerVolume = (fadeOutMaxGain - percentOfCrossfade).coerceIn(0f, fadeOutMaxGain)
        }
        .last()
        .doOnTerminate { fadeOutPlayer?.pause() }
        .doOnUnsubscribe { fadeOutPlayer?.pause() }
}
const val CROSSFADE_STEP_MS = 100L
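A possible call site, assuming RxJava 1 with RxAndroid (the player names and scheduler choices are my assumptions):
crossfadeObservable(
    fadeOutPlayer = currentPlayer,
    fadeInPlayer = nextPlayer,
    crossfadeDurationMs = 5_000L,
    crossfadeScheduler = Schedulers.computation(),
    uiScheduler = AndroidSchedulers.mainThread()
).subscribe { /* crossfade finished; promote nextPlayer to currentPlayer */ }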

get buffered data exoplayer

I am not very experienced at building Android apps and I am trying to make a small app using ExoPlayer, so hopefully you guys can pardon my ignorance. I am essentially trying to see if there is a way to get access to the buffered files. I searched around, but there doesn't seem to be an answer for this.
I saw people talking about CacheDataSource, but then I thought: isn't the data already being cached by virtue of it buffering? For instance, when a video starts, it starts buffering, and it continues to do so even if pause is pressed. If I am understanding this correctly, the video actually plays from the buffered data. I assume that this data must be stored somewhere. Is this the cache data in this case? If not, then what is cache data? What is the difference here? And finally, how can I actually get access to whatever this is?
I've been trying to see where it is stored and as what (meaning some kind of file, maybe), and I reached the DefaultAllocator class, which seems to have this line:
availableAllocations[i] = new Allocation(initialAllocationBlock, allocationOffset); // is this it??
This is in the DefaultAllocator.java file. I'm not sure I'm looking in the right place...
I am not able to make sense of what the buffer even is and how it is stored. YouTube stores .exo files. I can see a cache folder in data/data/myAppName/cache by printing getCacheDir(), but that seems to contain some java.io.fileAndSomeRandomChars. The buffer also gets deleted when the player is minimized or another app is opened.
Does ExoPlayer also store files in chunks?
Any insight on this would be seriously super helpful! I've been stuck on this for a few days now. Super duper appreciate it!
Buffers are not files; buffers are stored in application memory, and in this example they are instances of the ByteBuffer class. ExoPlayer buffers are passed through instances of MediaCodecRenderer via the processOutputBuffer() method.
Buffers are usually arrays of bytes (or possibly some other kind of data), while the ByteBuffer class adds some helpful methods around them, such as tracking the buffer's size or its last accessed position using markers.
The way I access buffers is by extending the implementation of the renderer that I am using and then overriding processOutputBuffer(), like this:
public class CustomMediaCodecAudioRenderer extends MediaCodecAudioRenderer {
    @Override
    protected boolean processOutputBuffer(long positionUs, long elapsedRealtimeUs, MediaCodec codec,
            ByteBuffer buffer, int bufferIndex, int bufferFlags, long bufferPresentationTimeUs,
            boolean shouldSkip) throws ExoPlaybackException {
        boolean fullyProcessed;

        // Here you use the buffer
        doSomethingWithBuffer(buffer);

        // Here we allow the renderer to do its normal stuff
        fullyProcessed = super.processOutputBuffer(positionUs,
                elapsedRealtimeUs,
                codec,
                buffer,
                bufferIndex,
                bufferFlags,
                bufferPresentationTimeUs,
                shouldSkip);

        return fullyProcessed;
    }
}
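The doSomethingWithBuffer() call above is a placeholder. As an illustration (my own sketch in Kotlin, not part of the original answer), one safe way to read the decoded bytes without disturbing the renderer is to copy them from a duplicate of the buffer, since duplicate() shares the content but keeps an independent position:
import java.nio.ByteBuffer

// Hypothetical: copy the decoded bytes out of the output buffer without
// moving its position, so the renderer can still consume it normally.
fun doSomethingWithBuffer(buffer: ByteBuffer) {
    val copy = ByteArray(buffer.remaining())
    buffer.duplicate().get(copy) // duplicate() shares content, not position
    // copy now holds this buffer's decoded data; hand it to your own code
}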

Recording Videos in Chunks Using Media Recorder Android

I am implementing an application that includes the functionality of saving recorded video into different video files based on a certain amount of time.
To achieve that, I have implemented a custom camera and used MediaRecorder.stop() and MediaRecorder.start() in a loop.
But this approach creates a lag effect while restarting the MediaRecorder (stop and start). Is it possible to seamlessly stop and start recording using MediaRecorder or any third-party library?
Any help is highly appreciated.
I believe the best solution to implement chunked recording is to set a maximum duration on the MediaRecorder object:
mMediaRecorder.setMaxDuration(CHUNK_TIME);
Then you can attach an info listener; it will notify you when recording hits the maximum chunk time:
mMediaRecorder.setOnInfoListener(new MediaRecorder.OnInfoListener() {
    @Override
    public void onInfo(MediaRecorder mr, int what, int extra) {
        if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_DURATION_REACHED) {
            // restartVideo()
        }
    }
});
In restartVideo() you should first release the previous MediaRecorder object and then start recording again, as sketched below.
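A minimal sketch of what restartVideo() could look like; createConfiguredRecorder() is a hypothetical stand-in for whatever code the app already uses to set up a MediaRecorder with a new output file:
import android.media.MediaRecorder

// Hypothetical: tear down the finished recorder and start the next chunk.
// Assumes mMediaRecorder is a var property on the enclosing class.
fun restartVideo() {
    mMediaRecorder.reset()   // clear the previous configuration
    mMediaRecorder.release() // free the underlying resources
    mMediaRecorder = createConfiguredRecorder() // assumed helper: sets source, output file, maxDuration, listener
    mMediaRecorder.prepare()
    mMediaRecorder.start()
}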
You can create two instances of MediaRecorder that overlap slightly (i.e. when the stream is close to the end of the first chunk, you prepare and start the second one). It is possible to record two video files using two MediaRecorders at the same time if they capture only video. Unfortunately, sharing the mic between two MediaRecorder instances is not supported.
