Pjsip Android Video Call: Inverted Video Preview

I am using the Pjsip library for SIP video calls. I am facing an issue displaying my own view (the local preview) in a SurfaceView.
(Screenshots omitted: the actual, mirrored preview and the expected view.)
Fetching the preview ID in onCallMediaState:
mVideoPreview = VideoPreview(mediaInfo.videoCapDev)
mVideoWindow = VideoWindow(mediaInfo.videoIncomingWindowId)
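For context, a sketch of where these two lines sit (assuming a standard PJSUA2 Call subclass; the loop and enum names follow the stock pjsua2 sample code, so treat this as an illustration rather than exact code):
override fun onCallMediaState(prm: OnCallMediaStateParam) {
    val ci = try { info } catch (e: Exception) { return } // Call.getInfo()
    var i = 0
    while (i < ci.media.size()) {
        val mediaInfo = ci.media.get(i)
        if (mediaInfo.type == pjmedia_type.PJMEDIA_TYPE_VIDEO &&
            mediaInfo.status == pjsua_call_media_status.PJSUA_CALL_MEDIA_ACTIVE
        ) {
            // Same two assignments as above
            mVideoPreview = VideoPreview(mediaInfo.videoCapDev)
            mVideoWindow = VideoWindow(mediaInfo.videoIncomingWindowId)
        }
        i++
    }
}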
Code I have used to display this preview in my SurfaceView:
fun updateVideoPreview(holder: SurfaceHolder) {
    if (SipManager.currentCall != null &&
        SipManager.currentCall?.mVideoPreview != null) {
        if (videoPreviewActive) {
            val vidWH = VideoWindowHandle()
            vidWH.handle?.setWindow(holder.surface)
            val vidPrevParam = VideoPreviewOpParam()
            vidPrevParam.window = vidWH
            try {
                SipManager.currentCall?.mVideoPreview?.start(vidPrevParam)
            } catch (e: Exception) {
                println(e)
            }
        } else {
            try {
                SipManager.currentCall?.mVideoPreview?.stop()
            } catch (e: Exception) {
                println(e)
            }
        }
    }
}
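For completeness, this is roughly how the method gets called (the SurfaceView name here is just a placeholder, not from my actual code): updateVideoPreview() runs whenever the Surface becomes available.
surfaceViewPreview.holder.addCallback(object : SurfaceHolder.Callback {
    override fun surfaceCreated(holder: SurfaceHolder) {
        updateVideoPreview(holder)
    }
    override fun surfaceChanged(holder: SurfaceHolder, format: Int, width: Int, height: Int) {
        updateVideoPreview(holder)
    }
    override fun surfaceDestroyed(holder: SurfaceHolder) {
        // the preview should be stopped here before the Surface goes away
    }
})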
I know that the person on the other side will always receive a mirrored view of my video. But for my own view, this should not happen.
What I think is happening is that I am displaying the preview that is sent to the other person. I cannot find a single hint about how to display my own view (without the mirror effect) using the Pjsip library.
Can anyone please help me with this?

What I did was replace the SurfaceView with a TextureView and then check:
if (isFrontCamera) {
    val matrix = Matrix()
    // Mirror horizontally, then shift the flipped image back into view
    matrix.setScale(-1.0f, 1.0f)
    matrix.postTranslate(width.toFloat(), 0.0f)
    surfacePreviewCapture.setTransform(matrix)
}
And it worked.
Hope it helps others. :)
====== UPDATE ======
When I checked with the back camera, the view was flipped there as well, so for that case I had to reset the transform to make it correct:
surfacePreviewCapture.setTransform(null)
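Putting both cases together, something like this works (assuming surfacePreviewCapture is the TextureView and isFrontCamera tracks the camera currently in use):
fun applyPreviewTransform() {
    if (isFrontCamera) {
        val matrix = Matrix()
        matrix.setScale(-1.0f, 1.0f) // mirror horizontally
        matrix.postTranslate(surfacePreviewCapture.width.toFloat(), 0.0f) // shift back into view
        surfacePreviewCapture.setTransform(matrix)
    } else {
        surfacePreviewCapture.setTransform(null) // back camera: identity, no mirroring
    }
}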

Instead of using a SurfaceView, you can use a TextureView for your preview, which you can flip afterwards. Have a look at "How to keep android from inverting the image from the front facing camera?" as a reference.

Related

How to detect EAR with ArCore android

So I am trying to put an earring on a person with ARCore, but the ARCore face mesh does not cover the ears. The best I can do is vertex 172, but that is still far away from the ear.
This is my code:
private fun getRegionPose(region: FaceRegion): Vector3? {
    val buffer = augmentedFace?.meshVertices
    if (buffer != null) {
        return when (region) {
            FaceRegion.EAR ->
                Vector3(
                    buffer.get(177 * 3),
                    buffer.get(177 * 3 + 1),
                    buffer.get(177 * 3 + 2)
                )
        }
    }
    return null
}

override fun onUpdate(frameTime: FrameTime?) {
    super.onUpdate(frameTime)
    augmentedFace?.let { face ->
        getRegionPose(FaceRegion.EAR)?.let {
            mustacheNode?.localPosition = Vector3(it.x, it.y - 0.035f, it.z + 0.015f)
            mustacheNode?.localScale = Vector3(0.07f, 0.07f, 0.07f)
        }
    }
}
Can someone please help me? Is there a way for me to go outside the bounds of the face landmarks?
Well, after a lot of research I found this example, and it is what led me to my answer. So I leave the tutorial here; hopefully it can help someone.
https://www.kodeco.com/523-augmented-reality-in-android-with-google-s-face-api#toc-anchor-003
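For reference, that tutorial's approach boils down to the Mobile Vision Face API (com.google.android.gms.vision), which exposes ear landmarks directly instead of face-mesh vertices. A rough sketch, assuming bitmap is a camera frame and context is available:
val detector = FaceDetector.Builder(context)
    .setLandmarkType(FaceDetector.ALL_LANDMARKS)
    .build()
val faces = detector.detect(Frame.Builder().setBitmap(bitmap).build())
for (i in 0 until faces.size()) {
    val face = faces.valueAt(i)
    face.landmarks
        .filter { it.type == Landmark.LEFT_EAR || it.type == Landmark.RIGHT_EAR }
        .forEach { landmark ->
            // landmark.position is a PointF in the bitmap's coordinate space;
            // map it into your scene to place the earring node.
            Log.d("EarLandmark", "ear at ${landmark.position.x}, ${landmark.position.y}")
        }
}
detector.release() // FaceDetector holds native resources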

Chrome 84 Webview update causes stuttering while rendering MediaStream - Samsung Tab A 8.0

I am trying to render the camera preview from a Samsung Galaxy Tab A inside a Cordova app. This was all working just fine until the Chrome 84 update, and 85 did not fix it either. Disabling Chrome altogether on the impacted devices is my only current workaround.
First, I fetch the devices and select the rear facing camera if there are multiple:
navigator.mediaDevices.enumerateDevices().then(function(dev) {
    ctrl.devices = dev.filter(function(el) { return el.kind == "videoinput"; });
    if (ctrl.devices.length > 1) {
        // default to the back facing camera
        ctrl.selectedDevice = ctrl.devices[1];
        ctrl.startCamera();
    }
    else if (ctrl.devices.length == 1) {
        ctrl.selectedDevice = ctrl.devices[0];
        ctrl.startCamera();
    }
    else {
        ctrl.NoDeviceFound = true;
        console.log("No camera found!");
    }
});
Once I have a device selected, I start the camera up with this function:
ctrl.startCamera = function startCamera() {
    ctrl.stopCamera(); // this function just stops the stream if one was already open
    if (ctrl.selectedDevice) {
        var constraints = {video: { deviceId: { exact: ctrl.selectedDevice.deviceId } }};
        navigator.mediaDevices.getUserMedia(constraints).then(handleSuccess).catch(handleError);
    }
}
My success handler is where the stream then gets injected into the video element and rendered:
function handleSuccess(stream) {
    ctrl.stream = stream;
    window.stream = stream; // make stream available to browser console
    window.video = angular.element("video")[0];
    if (window.screen.orientation.type.indexOf('landscape') == 0) {
        video.width = angular.element("div.modal-body").width();
        video.height = video.width / 1.33;
    }
    else {
        video.height = angular.element("div.modal-body").height();
        video.width = video.height * .75;
    }
    if (typeof video.srcObject != "undefined") {
        video.srcObject = stream;
    }
    else {
        video.src = window.URL.createObjectURL(stream);
    }
    video.play();
}
I have a simple <video autoplay></video> element as my target.
Here's a video of what the preview looks like
It ends up just being the preview that is broken, though. You can see that it's stuck on the first frame there; however, if I go ahead and capture an image, it accurately reflects where the camera is pointed despite the rendered stream being broken. Occasionally, instead of getting stuck on that first frame, it will sort of 'boomerang' between the first few frames.
Edit: I believe this issue is being tracked here
It was in fact this chromium bug, which has been fixed for Chrome 86 and confirmed working on my devices.

Android camera2 api. Setting multiple ImageReader surfaces gives blank output

I have a camera2 implementation. The current setup uses a TextureView surface to display the actual camera preview and an ImageReader surface for capturing images.
Now I want to capture preview frames as well, so I tried adding a second ImageReader surface for capturing frames. But when I add that surface to the createCaptureSession request, the screen goes blank. What could possibly be wrong? Below is the code I use to add surfaces to createCaptureSession:
val surface = preview.surface
    ?: throw CameraAccessException(CameraAccessException.CAMERA_ERROR)
val previewIRSurface = previewImageReader?.surface
    ?: throw CameraAccessException(CameraAccessException.CAMERA_ERROR)
val captureSurface = captureImageReader?.surface
    ?: throw CameraAccessException(CameraAccessException.CAMERA_ERROR)

try {
    val template = if (zsl) CameraDevice.TEMPLATE_ZERO_SHUTTER_LAG else CameraDevice.TEMPLATE_PREVIEW
    previewRequestBuilder = camera?.createCaptureRequest(template)
        ?.apply { addTarget(surface) }
        ?: throw CameraAccessException(CameraAccessException.CAMERA_ERROR)
    val surfaces: ArrayList<Surface> = arrayListOf(surface, previewIRSurface, captureSurface)
    camera?.createCaptureSession(surfaces, sessionCallback, backgroundHandler)
} catch (e: CameraAccessException) {
    throw RuntimeException("Failed to start camera session")
}
The ImageReaders are initialized like this:
private fun prepareImageReaders() {
    val largestPreview = previewSizes.sizes(aspectRatio).last()
    previewImageReader?.close()
    previewImageReader = ImageReader.newInstance(
        largestPreview.width,
        largestPreview.height,
        internalOutputFormat,
        4 // maxImages
    ).apply { setOnImageAvailableListener(onPreviewImageAvailableListener, backgroundHandler) }

    val largestPicture = pictureSizes.sizes(aspectRatio).last()
    captureImageReader?.close()
    captureImageReader = ImageReader.newInstance(
        largestPicture.width,
        largestPicture.height,
        internalOutputFormat,
        2 // maxImages
    ).apply { setOnImageAvailableListener(onCaptureImageAvailableListener, backgroundHandler) }
}
More clarification about the parameters used above:
internalOutputFormat is either ImageFormat.JPEG or ImageFormat.YUV_420_888.
Image sizes are chosen from the best available sizes for the aspect ratio.
It works fine with either of the image readers individually, but as soon as I add both together: blank screen!
Testing on a Samsung Galaxy S8 with Android Oreo (8.0).
The original code is here https://github.com/pvasa/cameraview-ex/blob/development/cameraViewEx/src/main/api21/com/priyankvasa/android/cameraviewex/Camera2.kt
maxImages == 4 may be too much and exhaust your RAM. Also, it's not clear which internalOutputFormat you use and whether it is compatible with the largestPreview size.
The bottom line is: study the long list of tables of supported surface combinations for createCaptureSession(). Depending on your camera's capabilities, the three surfaces you use could be too many.
From the comments below, a working solution: "The error itself doesn't say much [...] but upon searching, it is found that multiple surfaces are not supported for JPEG format. Upon changing it to YUV_420_888 it works flawlessly."
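Based on that comment, a rough sketch of the change against the question's prepareImageReaders() (same field names assumed): keep the streaming reader on YUV_420_888 and reserve JPEG for still capture only.
previewImageReader?.close()
previewImageReader = ImageReader.newInstance(
    largestPreview.width,
    largestPreview.height,
    ImageFormat.YUV_420_888, // streaming frames: works alongside the other surfaces
    2 // keep maxImages small to limit memory pressure
).apply { setOnImageAvailableListener(onPreviewImageAvailableListener, backgroundHandler) }

captureImageReader?.close()
captureImageReader = ImageReader.newInstance(
    largestPicture.width,
    largestPicture.height,
    ImageFormat.JPEG, // still capture only
    2 // maxImages
).apply { setOnImageAvailableListener(onCaptureImageAvailableListener, backgroundHandler) }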

Stream video with bitmap as overlay

I'm new to Wowza and am working on a project to live-stream video captured from an Android device. I need to attach an image (a dynamic one) to the video stream so that users watching the stream can view it. The code I have tried is given below (from the Wowza example source code):
// Read in a PNG file from the app resources as a bitmap
Bitmap overlayBitmap = BitmapFactory.decodeResource(getResources(), R.drawable.overlay_logo);
// Initialize a bitmap renderer with the bitmap
mWZBitmap = new WZBitmap(overlayBitmap);
// Place the bitmap at top left of the display
mWZBitmap.setPosition(WZBitmap.LEFT, WZBitmap.TOP);
// Scale the bitmap initially to 75% of the display surface width
mWZBitmap.setScale(0.75f, WZBitmap.SURFACE_WIDTH);
// Register the bitmap renderer with the GoCoder camera preview view as a frame listener
mWZCameraView.registerFrameRenderer(mWZBitmap);
This works fine, but I don't want to show the image at the broadcasting end; the image should be visible only at the receiving end. Is there any way to get this done?
I managed to get this done by registering a frame renderer and setting the bitmap inside onWZVideoFrameRendererDraw.
The code snippet is given below (Kotlin):
private fun attachImageToBroadcast(scoreValue: ScoreUpdate) {
    bitmap = getBitMap(scoreValue)
    // Initialize a bitmap renderer with the bitmap
    mWZBitmap = WZBitmap(bitmap)
    // Position the bitmap in the display
    mWZBitmap!!.setPosition(WZBitmap.LEFT, WZBitmap.TOP)
    // Scale the bitmap initially
    mWZBitmap!!.setScale(0.37f, WZBitmap.FRAME_WIDTH)
    mWZBitmap!!.isVisible = false // as I don't want to show it initially
    mWZCameraView!!.registerFrameRenderer(mWZBitmap)
    mWZCameraView!!.registerFrameRenderer(VideoFrameRenderer())
}

private inner class VideoFrameRenderer : WZRenderAPI.VideoFrameRenderer {
    override fun onWZVideoFrameRendererRelease(p0: WZGLES.EglEnv?) {
    }

    override fun onWZVideoFrameRendererDraw(p0: WZGLES.EglEnv?, framSize: WZSize?, p2: Int) {
        mWZBitmap!!.setBitmap(bitmap) // note that the bitmap value gets changed once I get the new values
        // I have implemented some flags and conditions to check whether a new value has been obtained,
        // and setBitmap is called only if they are satisfied. Otherwise, since this is called
        // continuously, the screen can flicker.
    }

    override fun isWZVideoFrameRendererActive(): Boolean {
        return true
    }

    override fun onWZVideoFrameRendererInit(p0: WZGLES.EglEnv?) {
    }
}
In iOS, we can implement the WZVideoSink protocol to achieve this.
First, we need to update the scoreView with the latest score and then convert the view to an image.
Then we can embed this image into the captured frame using a WZVideoSink protocol method.
Sample code is given below.
// MARK: - WZVideoSink Protocol
func videoFrameWasCaptured(_ imageBuffer: CVImageBuffer, framePresentationTime: CMTime, frameDuration: CMTime) {
    if self.goCoder != nil && self.goCoder!.isStreaming {
        let frameImage = CIImage(cvImageBuffer: imageBuffer)
        var addCIImage: CIImage = CIImage()
        if let scoreImage = self.getViewAsImage() {
            // scoreImage is the image you want to embed.
            addCIImage = CIImage(cgImage: scoreImage.cgImage!)
        }
        let filter = CIFilter(name: "CISourceOverCompositing")
        filter?.setDefaults()
        filter?.setValue(addCIImage, forKey: kCIInputImageKey)
        filter?.setValue(frameImage, forKey: kCIInputBackgroundImageKey)
        if let outputImage: CIImage = filter?.value(forKey: kCIOutputImageKey) as? CIImage {
            let context = CIContext(options: nil)
            context.render(outputImage, to: imageBuffer)
        } else {
            let context = CIContext(options: nil)
            context.render(frameImage, to: imageBuffer)
        }
    }
}

func getViewAsImage() -> UIImage? {
    // convert scoreView to an image (optional return so it can be used with if-let above)
    UIGraphicsBeginImageContextWithOptions(self.scoreView.bounds.size, false, 0.0)
    self.scoreView.layer.render(in: UIGraphicsGetCurrentContext()!)
    let scoreImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return scoreImage
}

Android GearVR, how to run the camera live view instead of playing videos or showing image?

How do I run the live camera view in GearVR instead of playing videos or showing an image?
I tried with the GearVRf SDK, but the camera view I get is null. Please guide me to solve this.
Thanks.
GVRCameraSceneObject cameraObject = null;
try {
    cameraObject = new GVRCameraSceneObject(gvrContext, 3.6f, 2.0f);
    cameraObject.setUpCameraForVrMode(1); // set up 60 fps camera preview.
} catch (GVRCameraSceneObject.GVRCameraAccessException e) {
    // Cannot open camera
    Log.e(TAG, "Cannot open the camera", e);
    return; // bail out so cameraObject is not used while null
}
cameraObject.getTransform().setPosition(0.0f, 0.0f, -4.0f);
scene.getMainCameraRig().addChildObject(cameraObject);
Put this code into your GVRMain's onInit() method. Obviously, you should also have initialized your scene object.
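For reference, a rough sketch of that wiring (written in Kotlin, though the snippet above is Java; GVRContext.getMainScene() and the TAG constant are assumptions on top of it):
class CameraPassthroughMain : GVRMain() {
    companion object {
        private const val TAG = "CameraPassthrough"
    }

    override fun onInit(gvrContext: GVRContext) {
        val scene = gvrContext.mainScene // assumes GVRContext.getMainScene()
        val cameraObject = try {
            GVRCameraSceneObject(gvrContext, 3.6f, 2.0f).apply {
                setUpCameraForVrMode(1) // 60 fps camera preview
            }
        } catch (e: GVRCameraSceneObject.GVRCameraAccessException) {
            Log.e(TAG, "Cannot open the camera", e)
            return
        }
        cameraObject.transform.setPosition(0.0f, 0.0f, -4.0f)
        scene.mainCameraRig.addChildObject(cameraObject)
    }
}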
