I'm writing a Unity app for Android that uses the Vuforia Augmented Reality framework.
I need to manually change the camera exposure to get better augmentation. The problem is that Vuforia hides all camera handling and makes it inaccessible from outside.
Vuforia itself also uses JNI to work with the device's camera.
When I try the following in my Android plugin:
Camera camera = Camera.open();
I get a CAMERA_ERROR_EVICTED error: "Camera was disconnected due to use by higher priority user."
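For completeness, here is roughly the whole attempt (a minimal sketch; the error-callback registration is how the docs say the eviction surfaces via Camera.CAMERA_ERROR_EVICTED on API 23+):

import android.hardware.Camera;
import android.util.Log;

// Sketch: while Vuforia holds the camera, Camera.open() either throws a
// RuntimeException outright, or the error callback later fires with
// CAMERA_ERROR_EVICTED, because the legacy API cannot share a device
// that a higher-priority client already owns.
public final class CameraProbe {
    private static final String TAG = "CameraProbe";

    public static Camera tryOpenBackCamera() {
        try {
            Camera camera = Camera.open(); // first back-facing camera
            camera.setErrorCallback(new Camera.ErrorCallback() {
                @Override
                public void onError(int error, Camera cam) {
                    if (error == Camera.CAMERA_ERROR_EVICTED) {
                        Log.w(TAG, "Evicted: a higher-priority client took the camera");
                    }
                }
            });
            return camera;
        } catch (RuntimeException e) {
            // Thrown when another client (here: Vuforia) holds the camera.
            Log.w(TAG, "Camera.open() failed: " + e.getMessage());
            return null;
        }
    }
}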
Is there any way to get the current Camera object using JNI or any other approach?
Thanks for any help.
I'm developing an AR application using ARCore and got stuck on the following issue.
I want to get the Camera instance that the ARCore session initializes, in order to configure it myself (change the preview resolution, bitrate, white balance, and so on).
Unfortunately, ARCore uses an object called Session (also part of the ARCore lib), which in turn uses a Tango object (taken from the Tango project). The Tango object handles the hardware camera through JNI calls.
ARCore doesn't have an API to configure the camera, and getting it through reflection seems like a bad idea.
Anybody?
I am trying to build a data logger with machine-vision camera functions. The most important camera2 API feature to me is the ability to set the focus to infinity.
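For reference, this is the camera2 setting I care about most (a minimal sketch; the CaptureRequest.Builder is assumed to come from an already-opened CameraDevice):

import android.hardware.camera2.CameraMetadata;
import android.hardware.camera2.CaptureRequest;

// Sketch: fix the focus at infinity with camera2. LENS_FOCUS_DISTANCE is
// measured in diopters, so 0.0f means optical infinity.
public final class InfinityFocus {
    public static void apply(CaptureRequest.Builder builder) {
        // Disable autofocus so the manual lens distance takes effect.
        builder.set(CaptureRequest.CONTROL_AF_MODE,
                    CameraMetadata.CONTROL_AF_MODE_OFF);
        builder.set(CaptureRequest.LENS_FOCUS_DISTANCE, 0.0f);
    }
}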
I recently got a Lenovo Phab 2 Pro and have been exploring Tango's motion tracking and depth map functions. I would like to record the pose estimation and depth map from Tango alongside my original camera image. Although Tango seems to use only the fisheye camera and the time-of-flight camera for those tasks, I am not able to get any readings for pose or depth map whenever I open the main back-facing camera using camera2. I have been doing some research on Tango to see if it allows manual focus control, but unfortunately I couldn't find anything useful.
My questions are:
Is there a way to get Tango to work while having the back camera controlled by camera2?
If not, is there a way to manually control the main camera's focus using the Tango API?
Thank you all very much!
Using the Google Project Tango tablet, I want to take a selfie during an Area Learning session. That is, after it has learned an area, I want the AR tracking to keep working while I take a selfie. I tried using Unity's WebcamTexture to get at device 2 (the front-facing camera), but logcat says:
Unable to initialize camera: Fail to connect to camera service
My guess is that Tango takes over all the cameras and prevents this.
Is there a way around this? Can I temporarily suspend the AR camera(s), turn on the front camera for a while, save a frame from it, then stop the front camera and resume the AR camera(s)? And would I be able to use IMU data to keep some sense of orientation while the AR camera(s) are off? Using Unity.
In order to access the camera from a process other than the Tango Service, you have to disconnect the Tango Service.
However, you should be able to store a camera image from just the AR camera. See this post:
Unity plugin using OpenGL for Project Tango
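In a Java plugin, the disconnect/reconnect dance would look roughly like this (a sketch only: connect/disconnect come from the Tango Java API, but the sequencing, the front-camera id of 1, and the class and method names are my assumptions; note that motion tracking state is lost across the disconnect):

import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.TangoConfig;
import java.io.IOException;

// Sketch: free the camera hardware by disconnecting Tango, grab a
// front-camera frame, then reconnect. Pose continuity is not preserved.
public final class SelfieDuringTango {
    public static void takeFrontFrame(Tango tango, TangoConfig config) throws IOException {
        tango.disconnect();            // releases the camera hardware
        Camera front = Camera.open(1); // assumption: front camera id is 1
        try {
            front.setPreviewTexture(new SurfaceTexture(0)); // off-screen preview target
            front.startPreview();
            // ... capture a frame via a PreviewCallback here ...
        } finally {
            front.release();           // hand the hardware back
        }
        tango.connect(config);         // resume Tango; tracking restarts
    }
}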
In an Android project, I'm trying to capture video and process it in real time (like a Kinect). I tried two methods: using OpenCV, repeatedly calling mCamera.grab() and capture.retrieve(mRgba, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA); or using Android's Camera and continuously capturing images.
I feel that the OpenCV camera captures images faster than the Android one. But why?
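Here is roughly the OpenCV loop I mean (a sketch against the 2.4-era Android Java API; error handling omitted):

import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;
import org.opencv.highgui.VideoCapture;

// Sketch: grab() pulls the next frame from OpenCV's native camera and
// retrieve() converts it into an RGBA Mat for processing.
public final class NativeCaptureLoop {
    public static void run() {
        VideoCapture capture = new VideoCapture(Highgui.CV_CAP_ANDROID + 0); // + 0 = back camera
        Mat rgba = new Mat();
        try {
            while (capture.grab()) {
                capture.retrieve(rgba, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);
                // ... process rgba in real time here ...
            }
        } finally {
            capture.release();
        }
    }
}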
OpenCV uses a hack to get low-level access to the Android camera. It avoids several data copies and transitions between the native and managed layers.
Is it possible to control the autofocus feature of the Android camera using OpenCV's libnative_camera*.so?
Or maybe it's possible to manually set the focus distance?
Is there an alternative approach? (Maybe it's better to use the Android API to control the camera, grab frames in onPreviewFrame events, and pass them to native code?)
If your intention is to control the camera on your own, the Android Camera APIs suck. In particular, they suck at offering your hardware camera device number to the JavaCV native camera library; without a native device number, JavaCV is clueless about which camera (front or back) to connect to.
If your intention is only to perform object detection and the like, the Android Camera APIs coupled with JavaCV should work. Set up a callback buffer of sufficient size, call setPreviewCallbackWithBuffer, set a sufficient preview frame rate, and once you start getting preview frames in ImageFormat.NV21 (mind you, this is the only format guaranteed for preview frames, even in ICS), pass them off to JavaCV for object detection, as sketched below.
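That setup, in code, looks something like this (a sketch; the class name and the 30 fps figure are my own choices):

import android.graphics.ImageFormat;
import android.hardware.Camera;

// Sketch: one reusable buffer sized for NV21 frames, re-queued after each
// callback so the camera never has to allocate. Hand the bytes to JavaCV
// where the comment indicates.
public final class BufferedPreview implements Camera.PreviewCallback {
    private final byte[] buffer;

    public BufferedPreview(Camera camera) {
        Camera.Parameters params = camera.getParameters();
        params.setPreviewFormat(ImageFormat.NV21); // the only guaranteed preview format
        params.setPreviewFrameRate(30);            // assumption: the device supports 30 fps
        camera.setParameters(params);

        Camera.Size size = params.getPreviewSize();
        int bitsPerPixel = ImageFormat.getBitsPerPixel(ImageFormat.NV21); // 12 for NV21
        buffer = new byte[size.width * size.height * bitsPerPixel / 8];

        camera.addCallbackBuffer(buffer);
        camera.setPreviewCallbackWithBuffer(this);
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // ... pass `data` (NV21) to JavaCV for object detection here ...
        camera.addCallbackBuffer(data); // return the buffer for the next frame
    }
}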
Autofocus in the Android Camera APIs sucks big time. I have been researching feasible solutions for over a month.
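For what it's worth, the two hooks the old API gives you look like this (a sketch; continuous focus modes are only available on devices that report them):

import android.hardware.Camera;

// Sketch: prefer a continuous focus mode where supported; otherwise fall
// back to one-shot autoFocus() sweeps with a completion callback.
public final class FocusHelper {
    public static void configure(Camera camera) {
        Camera.Parameters params = camera.getParameters();
        if (params.getSupportedFocusModes()
                  .contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO)) {
            params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
            camera.setParameters(params);
        } else {
            camera.autoFocus(new Camera.AutoFocusCallback() {
                @Override
                public void onAutoFocus(boolean success, Camera cam) {
                    // success == false means the sweep failed; retry or give up.
                }
            });
        }
    }
}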