I am developing a mobile app for Android with Xamarin and I am trying to use the Camera2 API. Everything looks fine except for one line, which fails with a type-conversion error: it says it cannot convert Java.Lang.Object to Android.Hardware.Camera2.Params.Face[]. The equivalent line works in Android Studio but not in C#.
This is the code I use in Xamarin, based on the Camera2Basic sample below. Other than the face detection part, all the requests I build work fine.
https://github.com/xamarin/monodroid-samples/tree/master/android5.0/Camera2Basic
Face[] faces = result.Get(CaptureResult.StatisticsFaces);
public class CameraCaptureListener : CameraCaptureSession.CaptureCallback
{
    public FaceTrainActivityFragment Owner { get; set; }
    public File File { get; set; }

    public override void OnCaptureCompleted(CameraCaptureSession session, CaptureRequest request, TotalCaptureResult result)
    {
        Process(result);
    }

    public override void OnCaptureProgressed(CameraCaptureSession session, CaptureRequest request, CaptureResult partialResult)
    {
        Process(partialResult);
    }

    private void Process(CaptureResult result)
    {
        switch (Owner.mState)
        {
            case FaceTrainActivityFragment.STATE_PREVIEW:
            {
                if (result.Get(CaptureResult.StatisticsFaces) != null)
                {
                    //Face[] faces = result.Get(CaptureResult.StatisticsFaces);
                    //Face[] faces = (Face[])result.Get(CaptureResult.StatisticsFaces);
                }
                break;
            }
        }
    }
}
It does not let me compile; even if I use a hard cast to (Face[]), it gives me the same Java.Lang.Object error.
public void CreateCameraPreviewSession()
{
    try
    {
        SurfaceTexture texture = mTextureView.SurfaceTexture;
        if (texture == null)
        {
            throw new IllegalStateException("texture is null");
        }
        if (mCameraDevice == null)
        {
            return;
        }

        // We configure the size of the default buffer to be the size of the camera preview we want.
        texture.SetDefaultBufferSize(mPreviewSize.Width, mPreviewSize.Height);

        // This is the output Surface we need to start the preview.
        Surface surface = new Surface(texture);

        // We set up a CaptureRequest.Builder with the output Surface.
        mPreviewRequestBuilder = mCameraDevice.CreateCaptureRequest(CameraTemplate.Preview);
        mPreviewRequestBuilder.AddTarget(surface);

        // Here, we create a CameraCaptureSession for the camera preview.
        List<Surface> surfaces = new List<Surface>();
        surfaces.Add(surface);
        //surfaces.Add(mImageReader.Surface);
        setFaceDetect(mPreviewRequestBuilder, mFaceDetectMode);
        mCameraDevice.CreateCaptureSession(surfaces, new CameraCaptureSessionCallback(this), null);
    }
    catch (CameraAccessException e)
    {
        e.PrintStackTrace();
    }
}
I am calling CreateCameraPreviewSession from the camera state listener, like this:
public class CameraStateListener : CameraDevice.StateCallback
{
    public FaceTrainActivityFragment owner;

    public override void OnOpened(CameraDevice cameraDevice)
    {
        // This method is called when the camera is opened. We start the camera preview here.
        owner.mCameraOpenCloseLock.Release();
        owner.mCameraDevice = cameraDevice;
        owner.CreateCameraPreviewSession();
    }
}
It says it cannot convert Java.Lang.Object to Android.Hardware.Camera.Params.Face[]. This line works in Android Studio but not in C#.
From the error you are getting, you are probably using the wrong namespace for Face. Instead of Android.Hardware.Camera.Params.Face, please use Android.Hardware.Camera2.Params.Face.
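If the namespace is already Camera2 and the compiler still rejects the cast, the remaining wrinkle is that the Xamarin binding of CaptureResult.Get() returns a Java.Lang.Object wrapping the Java array, so a direct (Face[]) cast fails. A minimal sketch of the conversion, assuming the ToArray<T>() helper on Java.Lang.Object is available in your Xamarin.Android version:
using Android.Hardware.Camera2;
using Android.Hardware.Camera2.Params;

private void Process(CaptureResult result)
{
    // Get() returns Java.Lang.Object in the Xamarin binding, not Face[].
    var facesObject = result.Get(CaptureResult.StatisticsFaces);
    if (facesObject != null)
    {
        // ToArray<T>() marshals the wrapped Java array into a managed
        // Face[]; a direct (Face[]) cast cannot perform this conversion.
        Face[] faces = facesObject.ToArray<Face>();
        System.Diagnostics.Debug.WriteLine($"Detected {faces.Length} face(s)");
    }
}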
I am trying to resume my camera preview on Android after putting the app to sleep or switching between apps, or even after starting a different app that uses the camera; the camera crashes with GetParameters() returning null.
Is there a way to regain control over the camera preview when resuming in a Xamarin.Forms application?
I have tried Camera.Restart() and it didn't work.
public void SurfaceCreated(ISurfaceHolder holder)
{
    try
    {
        if (Preview != null)
        {
            Preview.StopPreview();
            Preview.Reconnect();
            Preview.SetPreviewDisplay(holder);
            Preview.EnableShutterSound(true);
        }
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine($"ERROR: {ex.Message}");
    }
}

public void SurfaceDestroyed(ISurfaceHolder holder)
{
    Preview.StopPreview();
    Preview.Release();
}

public void SurfaceChanged(ISurfaceHolder holder, Android.Graphics.Format format, int width, int height)
{
    Camera.Parameters parameters = Preview.GetParameters();
    parameters.FocusMode = Camera.Parameters.FocusModeContinuousPicture;
    IList<Camera.Size> previewSizes = parameters.SupportedPreviewSizes;
    // You need to choose the most appropriate previewSize for your app.
    Camera.Size previewSize = previewSizes[0];
    parameters.SetPreviewSize(previewSize.Width, previewSize.Height);
    Preview.SetParameters(parameters);
    Preview.StartPreview();
}
I was able to get it to work by reading more thoroughly about the behaviour of the Android camera (Camera1) hardware.
If you are working with Xamarin and trying to create a camera view within the app, the best way to do it is to make a custom renderer and create a camera view on each platform, as shown here:
https://learn.microsoft.com/en-ca/xamarin/xamarin-forms/app-fundamentals/custom-renderer/view
However, that example only shows how to create the camera preview; it does not cover the camera hardware lifecycle or taking pictures.
To solve the issue in the question above, I simply had to call Camera.Open(0) to regain control over the camera within the lifecycle of the Xamarin.Forms pages.
Here is what I did (in the CameraPreview class in Xamarin.Forms):
Created camera open and close events:
public event EventHandler OpenCameraRequest;
public event EventHandler CloseCameraRequest;
Created a method to invoke the open event:
public void OpenCamera()
{
    OpenCameraRequest?.Invoke(this, EventArgs.Empty);
}
Registered the handler in the Android native camera renderer:
protected override void OnElementChanged(ElementChangedEventArgs<CameraPreview> e)
{
    base.OnElementChanged(e);
    if (Control == null)
    {
        _nativeCameraPreview = new NativeCameraPreview(Context);
        _nativeCameraPreview.PhotoCaptured += OnPhotoCaptured;
        SetNativeControl(_nativeCameraPreview);
    }
    Control.Preview = Camera.Open(0);
    if (e.OldElement != null)
    {
        e.OldElement.OpenCameraRequest -= OnOpenCameraRequest;
    }
    if (e.NewElement != null)
    {
        e.NewElement.OpenCameraRequest += OnOpenCameraRequest;
    }
}
private void OnOpenCameraRequest(object sender, EventArgs e)
{
    Control.Preview = Camera.Open(0);
}
Invoked the request from the Xamarin.Forms page's OnAppearing method:
protected override void OnAppearing()
{
    base.OnAppearing();
    CameraPreview.OpenCamera();
}
This fixed the issue of resuming the camera preview after opening another application that uses the camera, or after putting the app to sleep, where the camera preview would otherwise time out.
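The CloseCameraRequest event declared above suggests a symmetric teardown path. A minimal sketch of what that might look like (the CloseCamera and OnCloseCameraRequest names are hypothetical, following the same event pattern):
// In the Xamarin.Forms CameraPreview class, mirroring OpenCamera():
public void CloseCamera()
{
    CloseCameraRequest?.Invoke(this, EventArgs.Empty);
}

// In the Android renderer, subscribed the same way as OnOpenCameraRequest:
private void OnCloseCameraRequest(object sender, EventArgs e)
{
    if (Control.Preview != null)
    {
        // Releasing here lets other apps (and this page, on resume)
        // reacquire the camera instead of hitting a timed-out preview.
        Control.Preview.StopPreview();
        Control.Preview.Release();
        Control.Preview = null;
    }
}

// Invoked from the Xamarin.Forms page:
protected override void OnDisappearing()
{
    base.OnDisappearing();
    CameraPreview.CloseCamera();
}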
Is it possible to use the WebRTC VideoCapturer without a PeerConnection?
We have a working Android app (based on examples/androidapp). We have taken the following code from the working app into a separate activity, where we use the camera capturer directly without creating a PeerConnection. We create a video capturer (Camera2) using an instance of CapturerObserver and then try to render it to org.webrtc.SurfaceViewRenderer. Below is the code.
As expected, onFrameCaptured of the CapturerObserver is called multiple times with a valid videoFrame object. From there, we pass it to the SurfaceViewRenderer. However, the video does not render and the SurfaceViewRenderer remains black.
Is this the correct way of using VideoCapturer and SurfaceViewRenderer? Does it require any format conversion before sending frames to the SurfaceViewRenderer?
private class MyCapturerObserver implements CapturerObserver {
    @Override
    public void onCapturerStarted(boolean b) {
        Log.e(TAG, "capture started: " + b);
    }

    @Override
    public void onCapturerStopped() {
        Log.e(TAG, "capture stopped");
    }

    @Override
    public void onFrameCaptured(final VideoFrame videoFrame) {
        //fullscreenRenderer.onFrame(videoFrame);
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                fullscreenRenderer.onFrame(videoFrame);
            }
        });
    }
}
capturer = createVideoCapturer();
captureObserver = new MyCapturerObserver();
surfaceTextureHelper = SurfaceTextureHelper.create("CaptureThread", eglBase.getEglBaseContext());
capturer.initialize(surfaceTextureHelper, getApplicationContext(), captureObserver);
capturer.startCapture(1280, 720, 30);
Use factory.createVideoSource; you can use it before creating a PeerConnection. You can refer to the source code in PeerConnectionClient.java:
public VideoTrack createVideoTrack(VideoCapturer capturer) {
    surfaceTextureHelper = SurfaceTextureHelper.create("CaptureThread", rootEglBase.getEglBaseContext());
    videoSource = factory.createVideoSource(capturer.isScreencast());
    capturer.initialize(surfaceTextureHelper, appContext, videoSource.getCapturerObserver());
    capturer.startCapture(videoWidth, videoHeight, videoFps);

    localVideoTrack = factory.createVideoTrack(VIDEO_TRACK_ID, videoSource);
    localVideoTrack.setEnabled(renderVideo);
    localVideoTrack.addSink(localRender);
    return localVideoTrack;
}
I am trying to detect when the camera has focused (or has stopped trying to), so I am calling result.get(CaptureResult.CONTROL_AF_STATE) in the onCaptureCompleted method of the callback.
It kind of works for mode AF_MODE_CONTINUOUS_PICTURE: the camera reports CONTROL_AF_STATE 1 or 2 (CONTROL_AF_STATE_PASSIVE_SCAN or CONTROL_AF_STATE_PASSIVE_FOCUSED), which is nice.
However, when the camera is switched to AF_MODE_MACRO, the reported CONTROL_AF_STATE is always 0 (INACTIVE) no matter what happens. I was trying to refer to [1], but I probably do not get it right.
Further info: when changing modes between AF_MODE_MACRO and AF_MODE_CONTINUOUS_PICTURE, I always start a new capture session like this:
private void configCaptureSession(boolean macroModeNew) {
    this.macroMode = macroModeNew;
    try {
        // Wanna macro?
        if (macroMode) {
            LOGGER.d("MACRO ON", "");
            previewRequestBuilder.set(
                CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_MACRO);
        } else {
            // Continuous AF
            LOGGER.d("MACRO OFF", "");
            previewRequestBuilder.set(
                CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
        }
        // Finally, we start displaying the camera preview.
        previewRequest = previewRequestBuilder.build();
        LOGGER.d("SETTING NEW SESSION", "");
        captureSession.setRepeatingRequest(
            previewRequest, captureCallback, backgroundHandler);
    } catch (final CameraAccessException e) {
        LOGGER.e(e, "Exception!");
    }
}
captureCallback:
private final CameraCaptureSession.CaptureCallback captureCallback =
    new CameraCaptureSession.CaptureCallback() {
        @Override
        public void onCaptureProgressed(
            final CameraCaptureSession session,
            final CaptureRequest request,
            final CaptureResult partialResult) {}

        @Override
        public void onCaptureCompleted(
            final CameraCaptureSession session,
            final CaptureRequest request,
            final TotalCaptureResult result) {
            afState = result.get(CaptureResult.CONTROL_AF_STATE);
            LOGGER.i("FOKKUS-MODE:" + result.get(CaptureResult.CONTROL_AF_MODE));
            LOGGER.i("FOKKUS:" + result.get(CaptureResult.CONTROL_AF_STATE));
        }
    };
Does your device list support for AF_MODE_MACRO in CONTROL_AF_AVAILABLE_MODES (https://developer.android.com/reference/android/hardware/camera2/CameraCharacteristics.html#CONTROL_AF_AVAILABLE_MODES)?
If not, then this is expected, as you're trying to use an unsupported focusing mode.
If it is supported, the next issue is that I don't see you issuing an AF trigger command anywhere. Have you looked at the state transition tables for AF_STATE here:
https://developer.android.com/reference/android/hardware/camera2/CaptureResult.html#CONTROL_AF_STATE?
For AF_AUTO and AF_MACRO, you have to trigger AF when you want a focus pass, and then wait for AF_STATE_FOCUSED_LOCKED or NOT_FOCUSED_LOCKED.
The continuous modes don't require a trigger to focus, which is why you're seeing something happening with them.
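To make the trigger step concrete, here is a minimal sketch in C# (using the Xamarin.Android Camera2 bindings, to match the language used earlier on this page; the previewRequestBuilder, captureSession, captureCallback, and backgroundHandler fields are assumed to be the same ones as in the question):
using Android.Hardware.Camera2;

// Hypothetical helper: call after switching to AF_MODE_MACRO / AF_MODE_AUTO
// whenever a focus pass is wanted.
void TriggerAutoFocus()
{
    // In AF_MODE_AUTO / AF_MODE_MACRO a focus pass only happens on demand.
    previewRequestBuilder.Set(CaptureRequest.ControlAfTrigger, (int)ControlAFTrigger.Start);
    captureSession.Capture(previewRequestBuilder.Build(), captureCallback, backgroundHandler);

    // Reset the trigger so the repeating preview request does not keep
    // re-triggering AF; the callback should then observe CONTROL_AF_STATE
    // moving through ACTIVE_SCAN to FOCUSED_LOCKED or NOT_FOCUSED_LOCKED.
    previewRequestBuilder.Set(CaptureRequest.ControlAfTrigger, (int)ControlAFTrigger.Idle);
    captureSession.SetRepeatingRequest(previewRequestBuilder.Build(), captureCallback, backgroundHandler);
}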
So, I managed to create the functionality I wanted with the old camera API, the way I wanted it.
With mCamera.autoFocus(autoFocusCallback); I detect when I have focus and run the required code while in preview mode.
Now I have a hard time grasping how to do the same with the Camera2 API. My first idea was to use something like this:
private void process(CaptureResult result) {
    switch (mState) {
        case STATE_PREVIEW: {
            // We have nothing to do when the camera preview is working normally.
            int afState = result.get(CaptureResult.CONTROL_AF_STATE);
            //if (CaptureResult.CONTROL_AF_STATE == afState) {
            Log.d("SOME KIND OF FOCUS", "WE HAVE");
            //}
            break;
        }
    }
}
but I fail to find a state that tells me we have gotten focus. Does anyone have an idea how this can be done with the Camera2 API?
For those interested, I ended up with a mix like this:
private CameraCaptureSession.CaptureCallback mCaptureCallback
        = new CameraCaptureSession.CaptureCallback() {

    private void process(CaptureResult result) {
        switch (mState) {
            case STATE_PREVIEW: {
                int afState = result.get(CaptureResult.CONTROL_AF_STATE);
                if (CaptureResult.CONTROL_AF_TRIGGER_START == afState) {
                    if (areWeFocused) {
                        // Run specific task here
                    }
                }
                if (CaptureResult.CONTROL_AF_STATE_PASSIVE_FOCUSED == afState) {
                    areWeFocused = true;
                } else {
                    areWeFocused = false;
                }
                break;
            }
        }
    }

    @Override
    public void onCaptureProgressed(CameraCaptureSession session, CaptureRequest request,
                                    CaptureResult partialResult) {
        process(partialResult);
    }

    @Override
    public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request,
                                   TotalCaptureResult result) {
        process(result);
    }
};
It works good enough :)
You've basically got it. The list of states you can check for and their transitions can be found here.
It depends on what CONTROL_AF_MODE you are using, but generally you check for FOCUSED_LOCKED or perhaps PASSIVE_FOCUSED, though you may want to have cases for NOT_FOCUSED_LOCKED and PASSIVE_UNFOCUSED in case the camera just cannot focus on the scene.
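Translated into the C#/Xamarin bindings used earlier on this page, a minimal sketch of that check could look like the following (the STATE_PREVIEW plumbing and the "run specific task" part from the question are assumed):
using Android.Hardware.Camera2;
using Integer = Java.Lang.Integer;

private void Process(CaptureResult result)
{
    // The binding returns a boxed Java integer, or null on early frames.
    var afState = (Integer)result.Get(CaptureResult.ControlAfState);
    if (afState == null)
        return;

    int state = afState.IntValue();

    // Focused: a continuous-AF pass settled, or a triggered pass locked.
    bool focused = state == (int)ControlAFState.PassiveFocused
                || state == (int)ControlAFState.FocusedLocked;

    // The camera gave up on this scene ("stopped trying to focus").
    bool gaveUp = state == (int)ControlAFState.PassiveUnfocused
               || state == (int)ControlAFState.NotFocusedLocked;

    if (focused)
    {
        // Run the task that previously lived in autoFocusCallback.
    }
    else if (gaveUp)
    {
        // Optionally retrigger AF or report the failure.
    }
}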
I wanted to make use of the ZXing library to detect QR codes in my app. But for the app's viewing purposes, I had to change the display orientation to portrait, so I had to integrate the whole ZXing library into my app and add camera.setDisplayOrientation(90) to the openDriver() method.
After doing this the program works, but I randomly get "RuntimeException: Fail to connect to camera service".
public void openDriver(SurfaceHolder holder) throws IOException {
    if (camera == null) {
        camera = Camera.open();
        // Check for null before touching the camera, otherwise this NPEs.
        if (camera == null) {
            throw new IOException();
        }
        camera.setDisplayOrientation(90);
    }
    camera.setPreviewDisplay(holder);
    if (!initialized) {
        initialized = true;
        configManager.initFromCameraParameters(camera);
    }
    configManager.setDesiredCameraParameters(camera);
    SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(context);
    reverseImage = prefs.getBoolean(PreferencesActivity.KEY_REVERSE_IMAGE, false);
    if (prefs.getBoolean(PreferencesActivity.KEY_FRONT_LIGHT, false)) {
        FlashlightManager.enableFlashlight();
    }
}
public void closeDriver() {
    if (camera != null) {
        FlashlightManager.disableFlashlight();
        camera.release();
        camera = null;
        framingRect = null;
        framingRectInPreview = null;
    }
}

/**
 * Asks the camera hardware to begin drawing preview frames to the screen.
 */
public void startPreview() {
    if (camera != null && !previewing) {
        camera.startPreview();
        previewing = true;
    }
}

/**
 * Tells the camera to stop drawing preview frames.
 */
public void stopPreview() {
    if (camera != null && previewing) {
        if (!useOneShotPreviewCallback) {
            camera.setPreviewCallback(null);
        }
        camera.stopPreview();
        previewCallback.setHandler(null, 0);
        autoFocusCallback.setHandler(null, 0);
        previewing = false;
    }
}
I doubt that the orientation change is causing that. I have found that you get that error whenever an activity stops but fails to call Camera.release() in its onPause(). The result is that the next time you call Camera.open() you get that runtime error, since the driver still considers the camera open even though the app/activity that opened it is gone.
You can easily get this to happen while debugging/testing when something throws an exception and brings the activity down. You need to be very diligent about catching all exceptions and releasing the camera before the activity finishes.
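To illustrate, here is a minimal sketch in C#/Xamarin form (matching the earlier answers on this page) of the defensive lifecycle handling, assuming an activity that owns a camera field like the one in the question:
using Android.App;
using Android.Hardware;

public class ScannerActivity : Activity
{
    Camera camera;

    protected override void OnPause()
    {
        base.OnPause();
        // Always release here, even if the activity is going down because
        // of an exception elsewhere; otherwise the driver keeps the camera
        // marked as open and the next Camera.Open() throws
        // "Fail to connect to camera service".
        if (camera != null)
        {
            camera.StopPreview();
            camera.Release();
            camera = null;
        }
    }

    protected override void OnResume()
    {
        base.OnResume();
        if (camera == null)
        {
            camera = Camera.Open();
            camera.SetDisplayOrientation(90);
        }
    }
}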
BTW, are you finding you need to power cycle the device in order to be able to open the camera again?