I am trying to build a data logger with a machine-vision camera function. For me, the most important part of the camera2 API is the ability to set the focus to infinity (sketched below).
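For reference, this is the kind of camera2 call I mean. A minimal sketch, assuming an already-opened CameraDevice device, a preview Surface previewSurface, and a configured CameraCaptureSession session; exception handling omitted:

    // Lock the lens at infinity: disable autofocus and request a focus
    // distance of 0 diopters, which camera2 defines as infinity.
    CaptureRequest.Builder builder =
            device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    builder.addTarget(previewSurface);
    builder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_OFF);
    builder.set(CaptureRequest.LENS_FOCUS_DISTANCE, 0.0f);
    session.setRepeatingRequest(builder.build(), null, null);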
I recently got a Lenovo Phab 2 Pro and have been exploring Tango's motion-tracking and depth-map functions. I would like to record the pose estimates and depth maps from Tango alongside my original camera images. Although Tango seems to use only the fisheye camera and the time-of-flight camera for those tasks, I stop getting any pose or depth-map readings as soon as I open the main back-facing camera with camera2. I have also been researching whether Tango allows manual focus control, but unfortunately I couldn't find anything useful.
My questions are:
Is there a way to get Tango to work while having the back camera controlled by camera2?
If not, is there a way to manually control the main camera's focus using the Tango API?
Thank you all very much!
Related
I want to apply offsets to both the translation and the rotation of ARCore's virtual camera pose (displayOrientedCameraPose). Is there any way I can do that? ARCore's camera only lets me read the current pose, not edit or update it. Creating another virtual camera that holds the pose with the offsets applied doesn't work, since a frame can have only one camera.
Unlike many others, I started working with ARCore in Unity and am now moving to Android Studio. In Unity it was quite straightforward, since Unity supports rendering from multiple cameras. Is anything similar possible in Android Studio?
At the moment ARCore allows you to use only one active ArSession, which contains only one ArCamera, i.e. the camera in your smartphone. Changing the ArCamera's pose would be pointless anyway, because 3D tracking depends heavily on that pose (every ArFrame stores the camera's position and rotation, along with all of the scene's ArAnchors and feature points).
Instead of repositioning and reorienting your ArCamera, you can move/rotate the whole ArScene, as in the sketch below.
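A minimal sketch of that idea, assuming you render with Sceneform's ArSceneView (arSceneView and the offset values are placeholders):

    // Parent all content to a single node and offset that node
    // instead of touching the ArCamera.
    Node worldRoot = new Node();
    worldRoot.setParent(arSceneView.getScene());
    // Apply the translation/rotation offsets to the whole scene at once.
    worldRoot.setWorldPosition(new Vector3(offsetX, offsetY, offsetZ));
    worldRoot.setWorldRotation(Quaternion.axisAngle(Vector3.up(), offsetYawDegrees));
    // Attach your renderables to worldRoot rather than directly to the scene.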
Hope this helps.
Is it possible to use the device's front-facing camera with ARCore? I see no references in Google's docs.
I can't find any reference in the docs. It would also make the whole AR process much more complicated: you would have to invert the logic for moving the camera, and recognizing planes would be much harder because the user is always in the way. Right now ARCore only recognizes planes, so you can't detect feature points on, say, the user's face.
The answer is: Yes.
With a front-facing camera (even one without a depth sensor) on any supported Android device, you can track the user's face since the ARCore 1.7 SDK release. The API that enables work with the front-facing camera is called Augmented Faces. With it you can create a high-quality, 468-point 3D mesh that your Android app can overlay on a user's face to bring fun animated effects; see the sketch below.
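A minimal sketch of the session setup, assuming an Android Context named context (rendering and exception handling omitted):

    // ARCore 1.7+: front-camera session with Augmented Faces enabled.
    Session session = new Session(context, EnumSet.of(Session.Feature.FRONT_CAMERA));
    Config config = new Config(session);
    config.setAugmentedFaceMode(Config.AugmentedFaceMode.MESH3D);
    session.configure(config);
    // Per frame: read the tracked faces and their 468-vertex mesh.
    for (AugmentedFace face : session.getAllTrackables(AugmentedFace.class)) {
        if (face.getTrackingState() == TrackingState.TRACKING) {
            FloatBuffer vertices = face.getMeshVertices(); // x/y/z per vertex
        }
    }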
I was reading the article Google released on Portrait Mode for the Pixel 2, in which they explain their approach. In it, they mention how they used the PDAF (phase-detection autofocus) system to compute the depth map.
I am trying to find out whether there is a way to manipulate this sensor from Android Studio, or whether the sensor can be reached through any other approach.
How does one get complete access to the camera imaging pipeline on Android?
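As far as I know, the raw PDAF phase data is not exposed through the public API, but you can at least query what the camera2 pipeline offers on a given device. A minimal sketch, assuming a Context named context; exception handling omitted:

    // Enumerate camera2 capabilities. DEPTH_OUTPUT means the device can
    // stream depth images (e.g. DEPTH16) alongside regular frames, which is
    // the closest public hook to PDAF-derived depth.
    CameraManager manager =
            (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
    for (String id : manager.getCameraIdList()) {
        CameraCharacteristics chars = manager.getCameraCharacteristics(id);
        int[] caps = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES);
        for (int cap : caps) {
            if (cap == CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_DEPTH_OUTPUT) {
                // This camera can produce depth output.
            }
        }
    }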
Using a Google Project Tango tablet, I want to take a selfie during an Area Learning session. That is, after it has learned an area, I want the AR tracking to keep working while I take a selfie. I tried using Unity's WebcamTexture to get at device 2 (the front-facing camera), but logcat says:
Unable to initialize camera: Fail to connect to camera service
My guess is that Tango takes over all the cameras and prevents this from happening.
Is there a way around this? Can I temporarily suspend the AR camera(s), turn on the front camera for a while, save a frame from it, stop the front camera, and then resume the AR camera(s)? And would I be able to use IMU data to keep some sense of orientation while the AR camera(s) are off? I am using Unity.
In order to access a camera from a process other than the Tango Service, you have to disconnect the Tango Service; see the sketch below.
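A rough sketch of the sequence using the Tango Java client (tango and tangoConfig stand for your existing Tango and TangoConfig instances; the Unity lifecycle calls are analogous):

    // Release the cameras held by the Tango Service, grab the selfie,
    // then reconnect so pose tracking can resume.
    tango.disconnect();            // frees all cameras
    // ... open the front camera, capture one frame, release it ...
    tango.connect(tangoConfig);    // reconnect; tracking restarts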
However, you should be able to grab and store a camera image from the AR camera alone. See this post:
Unity plugin using OpenGL for Project Tango
Is it possible to control the autofocus feature of an Android camera using OpenCV's libnative_camera*.so?
Or maybe it's possible to set the focus distance manually?
Is there an alternative approach (maybe it's better to use the Android API to control the camera, grab frames in onPreviewFrame callbacks, and pass them to native code)?
If your intention is to control the camera on your own, then the Android Camera APIs suck. In particular, they suck at offering your hardware camera's device number to JavaCV's native camera library. Without a native device number, JavaCV is clueless about how to connect to the appropriate camera (front or back).
If your intention is only to perform object detection and the like, the Android Camera APIs coupled with JavaCV should work. Set up a callback buffer of sufficient size, call setPreviewCallbackWithBuffer, configure a sufficient preview frame rate, and once you start getting preview frames in ImageFormat.NV21 (mind you, this is the only format supported for preview frames, even in ICS), pass them off to JavaCV for object detection. A sketch follows.
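A minimal sketch of that setup with the legacy android.hardware.Camera API (the NV21 buffer needs 1.5 bytes per pixel; a preview SurfaceTexture must also be set before startPreview(), omitted here):

    // Fixed focus plus buffered NV21 preview callbacks for JavaCV.
    Camera camera = Camera.open();
    Camera.Parameters params = camera.getParameters();
    if (params.getSupportedFocusModes().contains(Camera.Parameters.FOCUS_MODE_INFINITY)) {
        params.setFocusMode(Camera.Parameters.FOCUS_MODE_INFINITY); // no AF hunting
    }
    camera.setParameters(params);
    Camera.Size size = camera.getParameters().getPreviewSize();
    byte[] buffer = new byte[size.width * size.height * 3 / 2]; // NV21 size
    camera.addCallbackBuffer(buffer);
    camera.setPreviewCallbackWithBuffer((data, cam) -> {
        // data holds one NV21 frame; hand it to JavaCV/native code here,
        // then recycle the buffer for the next frame.
        cam.addCallbackBuffer(data);
    });
    camera.startPreview();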
Autofocus in the Android Camera APIs sucks big time. I have been researching feasible solutions for over a month.