Video mirroring in Android

Recently, I was working with CameraX to record video with the front camera only, but I ran into an issue where the video is mirrored after saving.
Currently, I am using a library (Mp4Composer-android) to mirror the video back after recording, which adds processing time. I noticed that Snapchat and Instagram produce their output without this extra processing.
I also noticed that the native camera application provides an option to select whether we want the video mirrored or not.
The configuration I have added to CameraX:
videoCapture = VideoCapture
    .Builder()
    .apply {
        setBitRate(2000000)
        setVideoFrameRate(24)
    }
    .build()
How can I make my camera not mirror the video?

Temporary Solution:
I used this library as a temporary solution. The issue with this library was that I had to process the video after recording it, and that took a considerable amount of time. I used this code:
Add this to gradle:
//Video Composer
implementation 'com.github.MasayukiSuda:Mp4Composer-android:v0.4.1'
Code for flipping:
Mp4Composer(videoPath, video)
    .flipHorizontal(true)
    .listener(
        object : Mp4Composer.Listener {
            override fun onProgress(progress: Double) { }
            override fun onCurrentWrittenVideoTime(timeUs: Long) { }
            override fun onCompleted() { }
            override fun onCanceled() { }
            override fun onFailed(exception: Exception?) { }
        }
    )
    .start()
Note: This will compress your video too. Look into the library documentation for more details.
An answer that was given to me by a senior developer who has worked on video-related NDK code for a long time:
Think of the frames given out by the camera as traveling down a dedicated highway. There is a way to capture all the frames going through that highway:
Capture the frames coming through that highway
Flip the pixels of each frame
Give out the frames through that same highway
He didn't specify how to capture and release the frames.
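In isolation, the middle step (flipping the pixels of each frame) is simple; here is a minimal sketch for a single ARGB frame held in an IntArray. Capturing the frames and dispatching them back onto the "highway" is the hard, unspecified part and is not shown:
// Minimal sketch: mirror one ARGB frame in place by swapping pixels in each row.
fun flipFrameHorizontally(pixels: IntArray, width: Int, height: Int) {
    for (row in 0 until height) {
        var left = row * width          // first pixel of this row
        var right = left + width - 1    // last pixel of this row
        while (left < right) {
            val tmp = pixels[left]
            pixels[left] = pixels[right]
            pixels[right] = tmp
            left++
            right--
        }
    }
}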
Why I didn't use that solution (the issue):
If we have to perform this action in real time, we have to do it with high efficiency. Depending on the camera quality, we have to capture anywhere from 24 to 120 frames per second, and process and dispatch every one of them.
Doing that requires NDK developers and a lot of engineering effort, which most startups can't afford.

Related

How to keep two or more MediaCodec instances in Android

I have a vertical ViewPager2 whose ViewHolders use a SurfaceView to play MP4 files.
At any time there is only one ViewHolder instance and one Surface playing video.
I have two interfaces:
IMediaCodecProxy
interface IMediaCodecProxy {
    // all methods like MediaCodec
    fun selectCodec()
    fun configureCodec()
    fun release()
    fun flush()
}
IMediaCodecPool
interface IMediaCodecPool {
    fun getCurrentCodec(): MediaCodecWrapper?
    fun selectorCodec(configure: MediaCodecAdapter.Configuration)
    fun resetAndStoreCodec(codec: MediaCodecWrapper)
}
IMediaCodecProxy is a proxy for MediaCodec; its method signatures match MediaCodec's.
IMediaCodecPool is a pool that keeps all configured/started MediaCodec instances.
I want to keep the codec in its configured state when it is released.
What I do is:
Step 1: when video playback completes, IMediaCodecProxy#release works.
Step 2: check whether the current codec instance can be kept (I apply some limits to this).
Step 3: IMediaCodecPool#resetAndStoreCodec works.
Step 4: the Surface is destroyed and the next ViewHolder is created. When the next ViewHolder's Surface is created, IMediaCodecProxy#newCodec runs (I create a new codec instance because the two videos' MIME types differ), and then IMediaCodecProxy#configure fails.
I don't know what I need to do in IMediaCodecPool#resetAndStoreCodec.
I tried codec#flush, but it did not help.
So I don't know what went wrong: maybe the Surface? maybe the MediaCodec? maybe the codec's state?
Thanks for any help!
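One direction to explore (a minimal sketch, not a confirmed fix, assuming API 23+): flush() keeps a codec in its configured/started state, while stop() drops it back to Uninitialized; and setOutputSurface() can rebind a surface-configured codec to the next ViewHolder's Surface without calling configure() again. The pool shape and signatures below are simplified stand-ins for the interfaces above:
import android.media.MediaCodec
import android.view.Surface

class MediaCodecPool {
    private val pool = mutableMapOf<String, MediaCodec>() // keyed by MIME type

    fun resetAndStoreCodec(mimeType: String, codec: MediaCodec) {
        codec.flush() // drop queued buffers but keep the configured/started state
        pool[mimeType] = codec
    }

    fun acquire(mimeType: String, newSurface: Surface): MediaCodec? =
        pool.remove(mimeType)?.also {
            // API 23+; the codec must originally have been configured with a surface
            it.setOutputSurface(newSurface)
        }
}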

WebRTC ScreenCapturerAndroid

I'm trying to create a screen capturer app for Android. I already have the WebRTC portion set up with a video capturer using the Camera2Enumerator library from here. How can I modify this to create a pre-recorded video capturer instead of a camera capturer?
Thanks!
Just wanted to give an update that I have solved this. I'm unable to share the entire code, but here's a process that might help:
Acquire one frame of your pre-recorded file and store it in a byte array (it must be in YUV format).
Replace the VideoCapturer() with the following:
fun onGetFrame(p0: ByteArray?) {
    val timestampNS = java.util.concurrent.TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime())
    val buffer: NV21Buffer = NV21Buffer(p0, 288, 352, null)
    val videoFrame: VideoFrame = VideoFrame(buffer, 0, timestampNS)
    localVideoSource.capturerObserver.onFrameCaptured(videoFrame)
    videoFrame.release()
}
where p0 is the byte array with the frame.
Call this function in startLocalVideoCapture() using a timer, every few milliseconds (I used a 10 ms interval): https://developer.android.com/reference/android/os/CountDownTimer
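A hypothetical sketch of that timer loop: instead of CountDownTimer, this version uses a Handler that re-posts itself every 10 ms and feeds the stored frame to the onGetFrame() function above:
import android.os.Handler
import android.os.Looper

private val frameHandler = Handler(Looper.getMainLooper())

fun startFakeCapture(frameBytes: ByteArray) {
    frameHandler.post(object : Runnable {
        override fun run() {
            onGetFrame(frameBytes)               // push the same pre-recorded frame
            frameHandler.postDelayed(this, 10L)  // schedule the next tick in 10 ms
        }
    })
}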
Remove this call in startLocalVideoCapture():
VideoCapturer.initialize(
    surfaceTextureHelper,
    localVideoOutput.context,
    localVideoSource.capturerObserver
)

Spamming W/ImageReader_JNI: Unable to acquire a buffer item, very likely client tried to acquire more than maxImages buffers

I have an issue using the Camera2 API and Google ML Kit. What I am trying to do, for now, is just log a message when a face is detected. But I have this issue:
It is spamming the console with:
W/ImageReader_JNI: Unable to acquire a buffer item, very likely client tried to acquire more than maxImages buffers
And then it crashes my application with:
java.lang.IllegalStateException: maxImages (2) has already been acquired, call #close before acquiring more.
But as suggested by Google for CameraX (I use Camera2, but I assume I must do the same thing), I close the image I got with image.close() in the addOnCompleteListener.
Here is my code for the image reader:
val imageReader = ImageReader.newInstance(rotatedPreviewWidth, rotatedPreviewHeight,
    ImageFormat.YUV_420_888, 2)
imageReader.setOnImageAvailableListener({
    it.acquireLatestImage()?.let { image ->
        val mlImage = InputImage.fromMediaImage(image, getRotationCompensation(cameraDevice.id, getInstance(), true))
        val result = detector.process(mlImage)
            .addOnSuccessListener { faces ->
                if (faces.size > 0)
                    Log.d("photo", "Face found!")
                else
                    Log.d("photo", "No face has been found")
            }
            .addOnFailureListener { e ->
                Log.d("photo", "Error: $e")
            }
            .addOnCompleteListener {
                image.close()
            }
    }
}, Handler { true })
What I think is happening is this:
Since Google's processing might be slow, the addOnCompleteListener is not called before acquireLatestImage() is called to get a new image.
But I have no idea how to prevent that :(. Does anyone have an idea? Or maybe my assumptions about the problem are wrong?
Also, to prevent the crash, I increased maxImages to 4; now it just spams "W/ImageReader_JNI: Unable to acquire a buffer item, very likely client tried to acquire more than maxImages buffers" for some time (and then stops), but there is no crash.
But I think this solution hides the problem instead of solving it.
EDIT: Increasing the maxImages amount just delays the crash, which still happens later.
According to the best practices in the ML Kit developer guide:
"If you use the Camera or camera2 API, throttle calls to the detector. If a new video frame becomes available while the detector is running, drop the frame. See the VisionProcessorBase class in the quickstart sample app for an example."
https://developers.google.com/ml-kit/vision/object-detection/android
Is the image.close() method actually getting called? It's very likely that you'll need a few buffers acquired at the same time, but if 4 is not enough, it's possible you're not releasing them when the scanning is done. Or maybe the processing is very slow, and you need more buffers to be available in parallel.
Note that if the processing cannot keep up with the frame rate, you may need to drop frames manually (as sketched below) to ensure you don't block the frame flow.
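A minimal sketch of that frame-dropping idea on top of the question's code: keep at most one frame in flight and drop new frames while the detector is busy. Here rotationDegrees and handler are stand-ins for the question's own values:
import java.util.concurrent.atomic.AtomicBoolean

val isProcessing = AtomicBoolean(false)

imageReader.setOnImageAvailableListener({ reader ->
    val image = reader.acquireLatestImage() ?: return@setOnImageAvailableListener
    if (!isProcessing.compareAndSet(false, true)) {
        image.close()  // detector still busy: drop this frame immediately
        return@setOnImageAvailableListener
    }
    val mlImage = InputImage.fromMediaImage(image, rotationDegrees)
    detector.process(mlImage)
        .addOnCompleteListener {
            image.close()            // release the buffer back to the ImageReader
            isProcessing.set(false)  // accept the next frame
        }
}, handler)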

Beat matching crossfade with ExoPlayer

I want to implement a beat-matching crossfade feature using ExoPlayer. I have a concept of how it should work, but I find it hard to adapt it to ExoPlayer.
Let me first describe how I want to do this, so you can understand the case.
As you probably know, a beat-matching crossfade lets you seamlessly switch from one song to another. Additionally, it adjusts the second song's tempo to the first song's tempo during the crossfade.
So my plan is as follows:
1. Load songs A and B so they both start to buffer.
2. Decoded samples of songs A and B are stored in buffers BF1 and BF2.
3. There would be a class called MUX, which is the main buffer and contains both song buffers, BF1 and BF2. MUX provides audio samples to the Player. The samples provided to the Player are BF1 samples, or samples mixed from BF1 and BF2 if there is a crossfade.
4. When the buffer reaches the crossfade point, samples are sent to an Analyser class so it can analyse samples from both buffers and modify them for the crossfade. The Analyser sends the modified samples to MUX, which updates its main buffer.
5. When the crossfade is finished, load the next song from the playlist.
My main question is how to mix two songs so I can implement a class like MUX.
What I know so far is that I can access decoded samples in the MediaCodecRenderer.processOutputBuffer() method, so from that point I could create my BF1 and BF2 buffers.
There was also an idea to create two instances of ExoPlayer: while the first song is playing, the second one is analysed and its samples are modified for the upcoming crossfade. But I think it may be hard to synchronise the two players so the beats would match.
Thanks in advance for any help!
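For what it's worth, the core of the mixing step (ignoring tempo adjustment) is just a weighted sum of the decoded samples; a minimal sketch for 16-bit PCM, where fadeProgress runs from 0 to 1 over the crossfade:
// Minimal sketch: mix two decoded 16-bit PCM buffers with a linear crossfade.
// fadeProgress = 0f plays only buffer a; fadeProgress = 1f plays only buffer b.
fun mixPcm(a: ShortArray, b: ShortArray, fadeProgress: Float): ShortArray {
    val out = ShortArray(minOf(a.size, b.size))
    for (i in out.indices) {
        val mixed = a[i] * (1f - fadeProgress) + b[i] * fadeProgress
        out[i] = mixed.toInt().coerceIn(-32768, 32767).toShort() // clamp to 16-bit range
    }
    return out
}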
Answering @David's question about the crossfade implementation, it looks more or less like this. You have to listen to the active player's playback and call this method when you want to start the crossfade. It's in RxJava but can easily be migrated to Coroutines Flow.
fun crossfadeObservable(
    fadeOutPlayer: Player?,
    fadeInPlayer: Player,
    crossfadeDurationMs: Long,
    crossfadeScheduler: Scheduler,
    uiScheduler: Scheduler
): Observable<Unit> {
    val fadeInMaxGain = fadeInPlayer.audioTrack?.volume ?: 1f
    val fadeOutMaxGain = fadeOutPlayer?.audioTrack?.volume ?: 1f
    fadeOutPlayer?.enableAudioFocus(false)
    fadeInPlayer.enableAudioFocus(true)
    fadeInPlayer.playerVolume = 0f
    fadeInPlayer.play()
    fadeOutPlayer?.playerVolume = fadeOutMaxGain
    fadeOutPlayer?.play()
    val iterations: Float = crossfadeDurationMs / CROSSFADE_STEP_MS.toFloat()
    return Observable.interval(CROSSFADE_STEP_MS, TimeUnit.MILLISECONDS, crossfadeScheduler)
        .take(iterations.toInt())
        .map { iteration -> (iteration + 1) / iterations }
        .filter { percentOfCrossfade -> percentOfCrossfade <= 1f }
        .observeOn(uiScheduler)
        .map { percentOfCrossfade ->
            fadeInPlayer.playerVolume = percentOfCrossfade.coerceIn(0f, fadeInMaxGain)
            fadeOutPlayer?.playerVolume = (fadeOutMaxGain - percentOfCrossfade).coerceIn(0f, fadeOutMaxGain)
        }
        .last()
        .doOnTerminate { fadeOutPlayer?.pause() }
        .doOnUnsubscribe { fadeOutPlayer?.pause() }
}

const val CROSSFADE_STEP_MS = 100L

Pause MediaRecorder programmatically: Camera.apk from Samsung Galaxy has `this.mMediaRecorder.pause();`, which does not work in my code

I have made a library to concatenate two videos, using the mp4parser library.
With it I can pause and resume recording a video (after it records the second video, it appends it to the first one).
Now my boss told me to write a wrapper and use this approach only for phones that do not have hardware support for pausing a video. For phones that do have it (the Samsung Galaxy S2 and Samsung Galaxy S1 can pause a video recording with their camera application), I need to do this with no libraries, so it would be fast.
How can I implement this natively if, as seen on the MediaRecorder state diagram (http://developer.android.com/reference/android/media/MediaRecorder.html), there is no pause state?
I have decompiled the Camera.apk app from a Samsung Galaxy Ace, and in CamcorderEngine.class the code has a method like this:
public void doPauseVideoRecordingSync()
{
    Log.v("CamcorderEngine", "doPauseVideoRecordingSync");
    if (this.mMediaRecorder == null)
    {
        Log.e("CamcorderEngine", "MediaRecorder is not initialized.");
        return;
    }
    if (!this.mMediaRecorderRecording)
    {
        Log.e("CamcorderEngine", "Recording is not started yet.");
        return;
    }
    try
    {
        this.mMediaRecorder.pause();
        enableAlertSound();
        return;
    }
    catch (RuntimeException localRuntimeException)
    {
        Log.e("CamcorderEngine", "Could not pause media recorder. ", localRuntimeException);
        enableAlertSound();
    }
}
If I try this.mMediaRecorder.pause(); in my code, it does not work. How is this possible when they use the same import (android.media.MediaRecorder)? Have they rewritten the whole thing at a system level?
Is it possible to take the input stream of the second video (while recording it) and directly append this data to my first video?
My concatenate method takes two parameters (the two videos, both FileInputStream); is it possible to take the InputStream from the recording function and pass it as the second parameter?
If I try this.mMediaRecorder.pause();
The MediaRecorder class does not have a pause() function, so it is obvious that there is a custom MediaRecorder class on this specific device. This is not unusual, as the only thing required from OEMs is to pass the Android compatibility tests on the device; there is no restriction on adding functionality.
Is it possible to take the input stream of the second video (while recording it), and directly append this data to my first video?
I am not sure you can do this, because the video stream is encoded data (codec headers, key frames, and so on), and just combining 2 streams into 1 file will not produce a valid video file in my opinion.
Basically, what you can do is:
get raw image data from the camera preview surface (see Camera.setPreviewCallback())
use an android.media.MediaCodec to encode the video
and then use a FileOutputStream to write to the file.
This gives you the flexibility you want, as in this case your app decides which frames go into the encoder and which do not; a rough sketch follows below.
However, it may be overkill for your specific project, and some performance issues may arise.
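To make the frame-dropping idea concrete, a rough sketch using the legacy Camera preview callback; note that encoder.queueFrame is a hypothetical helper wrapping the MediaCodec input-buffer handling, which is omitted here:
// While "paused", preview frames are simply never handed to the encoder.
@Volatile
var recordingPaused = false

camera.setPreviewCallback { data, _ ->
    if (!recordingPaused) {
        encoder.queueFrame(data)  // hypothetical: feed the raw frame to MediaCodec
    }
    // Dropped frames never reach the encoder, which acts as the pause.
}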
PS: By the way, try taking a look at MediaMuxer; maybe it can help you too: https://developer.android.com/reference/android/media/MediaMuxer.html
