Is it possible to use the device's front facing camera with ARCore? I see no references in Google's docs.
I can't find any reference in the docs either. It would also make the whole AR process much more complicated: you would have to invert the logic for moving the camera, and it is much harder to recognize planes because the user is always in the way. Right now ARCore only detects planes, so you can't track feature points on, for example, the user's face.
The answer is: Yes.
Since the ARCore 1.7 SDK release you can track a user's face with the front-facing camera (no depth sensor required) on any supported Android device. The API that lets you work with the front-facing camera is called Augmented Faces. It generates a 468-point 3D face mesh that your Android app can overlay on the user's face to add animated effects.
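For reference, here is a minimal sketch (Java, plain ARCore API) of enabling Augmented Faces with the front camera; rendering is omitted, the per-frame query is only illustrative, and it assumes you are inside an Activity with camera permission already granted.

```java
// Uses com.google.ar.core.* and java.util.EnumSet.
// Create a session that uses the front-facing camera.
Session session = new Session(this, EnumSet.of(Session.Feature.FRONT_CAMERA));

// Enable the 3D face mesh.
Config config = new Config(session);
config.setAugmentedFaceMode(Config.AugmentedFaceMode.MESH3D);
session.configure(config);

// Later, once the session is running, query tracked faces each frame:
Collection<AugmentedFace> faces = session.getAllTrackables(AugmentedFace.class);
for (AugmentedFace face : faces) {
    if (face.getTrackingState() == TrackingState.TRACKING) {
        FloatBuffer vertices = face.getMeshVertices(); // 468 vertices, x/y/z each
        // ... feed the mesh into your renderer
    }
}
```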
In my app I'm trying to use ArCore as sort of a "camera assistant" in a custom camera view.
To be clear: I want to display AR content in the user's camera view, but let them capture images that don't contain the AR models.
From what I understand, in order to capture an image with ArCore I'll have to use the Camera2 API which is enabled by configuring the session to use the "shared Camera".
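(For reference, the shared-camera setup I mean looks roughly like this; it's only a sketch based on my understanding, and myStateCallback, backgroundHandler, and cameraManager are placeholders for my own Camera2 objects.)

```java
// Uses com.google.ar.core.Session, com.google.ar.core.SharedCamera,
// android.hardware.camera2.CameraDevice and java.util.EnumSet.
Session session = new Session(this, EnumSet.of(Session.Feature.SHARED_CAMERA));
SharedCamera sharedCamera = session.getSharedCamera();
String cameraId = session.getCameraConfig().getCameraId();

// Wrap my own Camera2 state callback so ARCore and Camera2 can share the device.
CameraDevice.StateCallback wrappedCallback =
        sharedCamera.createARDeviceStateCallback(myStateCallback, backgroundHandler);
cameraManager.openCamera(cameraId, wrappedCallback, backgroundHandler); // CameraAccessException handling omitted
```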
However, I can't seem to configure the camera to use any high-end resolutions (I'm using a Pixel 3, so I should be able to go as high as 12 MP).
In the "shared camera example", they toggle between Camera2 and ArCore (a shame there's no API for CameraX) and it has several problems:
In the ArCore mode the image is blurry (I assume that's because the depth sensor is disabled as stated in their documentation)
In the Camera2 mode I can't enhance the resolution at all.
I can't use the Camera2 API to capture an image while displaying models from ArCore.
Is this requirement at all possible at the moment?
I have not yet worked with the shared camera in ARCore, but I can say a few things regarding the main point of your question.
In ARCore you can configure both CPU image size and GPU image size. You can do that by checking all available camera configurations (available through Session.getSupportedCameraConfigs(CameraConfigFilter cameraConfigFilter)) and selecting your preferred one by passing it back to the ARCore Session. On each CameraConfig you can check which CPU image size and GPU texture size you will get.
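As a rough sketch (Java ARCore API), picking a higher-resolution config could look like this; the "largest CPU image wins" rule is only an example and, as explained below, comes at a cost:

```java
// Uses com.google.ar.core.*, android.util.Size and java.util.List.
CameraConfigFilter filter = new CameraConfigFilter(session);
List<CameraConfig> configs = session.getSupportedCameraConfigs(filter);

// Example selection rule: take the config with the largest CPU image.
CameraConfig chosen = configs.get(0);
for (CameraConfig config : configs) {
    Size cpu = config.getImageSize();   // CPU image size (Frame.acquireCameraImage)
    Size gpu = config.getTextureSize(); // GPU texture size (background rendering)
    if (cpu.getWidth() * cpu.getHeight()
            > chosen.getImageSize().getWidth() * chosen.getImageSize().getHeight()) {
        chosen = config;
    }
}

// The new config has to be applied while the session is paused.
session.pause();
session.setCameraConfig(chosen);
session.resume(); // CameraNotAvailableException handling omitted
```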
You are probably using (maybe by default?) the CameraConfig with the lowest CPU image size, 640x480 pixels if I remember correctly, so yes, it definitely looks blurry when rendered (but this has nothing to do with the depth sensor).
Sounds like you could just select a higher CPU image and you're good to go... but unfortunately that's not the case because that configuration applies to every frame. Getting higher resolution CPU images will result in much lower performance. When I tested this I got about 3-4 frames per second on my test device, definitely not ideal.
So now what? I think you have two options:
1. Pause the ARCore session, switch to a higher-resolution CPU image configuration for one frame, grab the image, and switch back to the "normal" configuration.
2. You are probably already getting a nice GPU image, maybe not the best due to the camera preview, but hopefully good enough? Not sure how you are rendering it, but with some OpenGL skills you can copy that texture. Not directly, of course, because of the whole GL_TEXTURE_EXTERNAL_OES thing, but rendering it onto another framebuffer and then reading back the texture attached to it could work (see the sketch right after this list). Of course you might need to deal with texture coordinates yourself (full image vs. visible area), but that's another topic.
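If it helps, here is a minimal sketch of that read-back idea, assuming you already have a current GL context, the width/height of the GPU texture from your CameraConfig, and the camera's GL_TEXTURE_EXTERNAL_OES texture id; drawOesQuad() stands in for your own full-screen quad pass that samples the external texture:

```java
// Uses android.opengl.GLES20 and java.nio.ByteBuffer.
int[] fbo = new int[1];
int[] rgbaTex = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glGenTextures(1, rgbaTex, 0);

// Plain RGBA texture we can actually read back (unlike the external OES texture).
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, rgbaTex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);

// Attach it to a framebuffer and render the camera quad into it.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, rgbaTex[0], 0);
GLES20.glViewport(0, 0, width, height);
drawOesQuad(cameraOesTextureId); // placeholder: your shader sampling GL_TEXTURE_EXTERNAL_OES

// Read the pixels back to the CPU (note: glReadPixels is slow and stalls the pipeline).
ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4);
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
```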
Regarding CameraX, note that it wraps the Camera2 API in order to provide camera use cases so that app developers don't have to worry about the camera lifecycle. As I understand it, CameraX would not be suitable for ARCore, as I imagine they need full control of the camera.
I hope that helps a bit!
I want to apply offsets to both the translation and rotation of ArCore's virtual camera pose (displayOrientedCameraPose). Is there any way I can do that? ArCore's camera only lets me read the current pose, not edit or update it. Creating another virtual camera that holds the pose with the offsets applied doesn't work, since a frame can have only one camera.
Unlike many others, I started working with ArCore in Unity first and am now moving to Android Studio. In Unity it was quite straightforward, since it supports rendering from multiple cameras. I'm wondering if anything similar is possible with Android Studio?
At the moment ARCore allows only one active ArSession, which contains only one ArCamera, i.e. the camera of your smartphone. Changing the ArCamera's Pose isn't useful because 3D tracking depends heavily on that Pose (every ArFrame stores the camera's position and rotation as well as all of the scene's ArAnchors and feature points).
Instead of repositioning and reorienting your ArCamera, you can move/rotate the whole ArScene.
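For example, instead of editing the camera pose you can compose your offset with the content's poses (a sketch using ARCore's Pose API; the offset values here are arbitrary):

```java
// Uses com.google.ar.core.Pose. Example offset: 10 cm up plus a 45-degree yaw.
Pose offset = Pose.makeTranslation(0f, 0.1f, 0f)
        .compose(Pose.makeRotation(0f, (float) Math.sin(Math.PI / 8), 0f,
                                   (float) Math.cos(Math.PI / 8)));

// Moving the camera by 'offset' is equivalent to moving the world by its inverse,
// so apply the inverse offset to your content instead of touching the ArCamera.
Pose cameraPose = frame.getCamera().getDisplayOrientedPose(); // stays read-only
Pose adjustedContentPose = offset.inverse().compose(anchor.getPose());
```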
Hope this helps.
I have been working on a Unity project that uses a Vuforia camera to lock a city layout to an image. However, my main issue is a typical one with image targets: when there isn't enough detail from the original target in the frame, the model disappears.
My idea is to use a Vuforia camera to identify a specific target, but then lock the model into place using a parallel ARCore camera so that I can use the device's orientation sensors to keep the city in position.
I am sure this is possible, as there is an example on Experiments with Google that achieves this, but I can't seem to find any technical details on the practicalities. Does anyone know how this can be achieved?
Example here:
https://experiments.withgoogle.com/ar/draw-and-dance
I am trying to build a data logger with a machine-vision camera function. For me, the most important feature of the camera2 API is the ability to set the focus to infinity.
I recently got a Lenovo Phab 2 Pro and have been exploring Tango's motion tracking and depth map functions. I would like to record the pose estimation and depth map from Tango alongside my original camera image. Although Tango seems to use only the fisheye camera and the time-of-flight camera for those tasks, I cannot get any pose or depth-map readings whenever I open the main back-facing camera using camera2. I have also been researching whether Tango allows manual focus control. Unfortunately, I couldn't find anything useful.
My questions are:
Is there a way to get Tango to work while having the back camera controlled by camera2?
If not, is there a way to manually control the main camera's focus using Tango api?
Thank you all very much!
I am developing a simple AR application that renders a 3D image on top of the camera view. I successfully implemented this on Windows 7. I used OpenCV's native pose estimation functions, which internally use the POSIT algorithm, to obtain the translation and rotation matrices that can then be applied to the 3D model.
I want to implement the same functionality in an Android application. The problem I am facing is that the pose estimation function requires the camera's intrinsic and distortion parameters as arguments, and I am not able to determine them.
I tried studying various AR platforms (AndAR, ARToolKit, etc.), but even after reverse-engineering their sources, I couldn't reach any conclusion about how they handle pose estimation.
Please suggest an appropriate method for pose estimation (and, if it involves camera distortion parameters, how I would determine them) and, from there, for rendering a 3D object over the camera view in an Android application.
OpenCV is a great library that gives you the methods to perform camera calibration. If you want to figure out what the intrinsic parameters of your camera are, see the camera calibration documentation. Many of those functions are also implemented in the Android OpenCV library.
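If it helps, here is a rough sketch of chessboard calibration with the OpenCV Java bindings; the 9x6 inner-corner pattern and the chessboardSnapshots list are assumptions for illustration, and the same flow exists in C++ and Python:

```java
// Uses org.opencv.core.*, org.opencv.calib3d.Calib3d and org.opencv.imgproc.Imgproc.
Size patternSize = new Size(9, 6); // inner corners of the printed chessboard

// Ideal 3D corner positions of one board view (z = 0 plane, square size = 1 unit).
MatOfPoint3f objectCorners = new MatOfPoint3f();
List<Point3> corners3d = new ArrayList<>();
for (int row = 0; row < 6; row++)
    for (int col = 0; col < 9; col++)
        corners3d.add(new Point3(col, row, 0));
objectCorners.fromList(corners3d);

List<Mat> objectPoints = new ArrayList<>();
List<Mat> imagePoints = new ArrayList<>();
Size imageSize = null;

for (Mat snapshot : chessboardSnapshots) { // placeholder: your captured chessboard photos
    Mat gray = new Mat();
    Imgproc.cvtColor(snapshot, gray, Imgproc.COLOR_BGR2GRAY);
    imageSize = gray.size();
    MatOfPoint2f corners = new MatOfPoint2f();
    if (Calib3d.findChessboardCorners(gray, patternSize, corners)) {
        imagePoints.add(corners);
        objectPoints.add(objectCorners);
    }
}

Mat cameraMatrix = new Mat(); // 3x3 intrinsics: fx, fy, cx, cy
Mat distCoeffs = new Mat();   // distortion coefficients: k1, k2, p1, p2, k3
List<Mat> rvecs = new ArrayList<>();
List<Mat> tvecs = new ArrayList<>();
Calib3d.calibrateCamera(objectPoints, imagePoints, imageSize,
        cameraMatrix, distCoeffs, rvecs, tvecs);
```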
Finally, you can also use the internal sensors Android ships with to obtain gravity, linear acceleration, and orientation. See this inspiring talk by David Sachs on how to use sensor fusion on Android.
I was able to perform camera calibration for the mobile device. I captured chessboard snapshots with the mobile camera from various angles and then used them in the C++ camera calibration code.
Then, using these intrinsic and extrinsic camera parameters, I started building the pose estimation module.
I parsed the XML files and built the matrices.
Then, following this OpenCV tutorial (OpenCV POSIT), I was able to perform pose estimation.
However, I have yet to verify it in an actual program. Theoretically it is achievable, and the code doesn't produce any build or compilation errors.
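In case it's useful to others: as far as I know, the legacy POSIT API is no longer available in recent OpenCV releases, and the same step is usually done with Calib3d.solvePnP, which takes exactly the intrinsics and distortion coefficients produced by the calibration above (a sketch; modelPoints and imagePoints stand in for your own 3D-2D correspondences):

```java
// Uses org.opencv.core.* and org.opencv.calib3d.Calib3d.
// modelPoints: 3D points of the model (MatOfPoint3f); imagePoints: their observed
// 2D projections (MatOfPoint2f). Both are placeholders here.
Mat rvec = new Mat(); // rotation as a Rodrigues vector
Mat tvec = new Mat(); // translation vector
Calib3d.solvePnP(modelPoints, imagePoints, cameraMatrix,
        new MatOfDouble(distCoeffs), rvec, tvec);

// Convert the Rodrigues vector to a 3x3 rotation matrix for rendering.
Mat rotationMatrix = new Mat();
Calib3d.Rodrigues(rvec, rotationMatrix);
```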