How to orient a GLTF model in Android Sceneform level with gravity

I've been getting my butt kicked trying to get a vertically placed 3D model (GLB format) positioned properly on a vertical surface.
Just to be clear, I am not referring to the difficulty of identifying a vertical surface; that is a whole other problem in itself.
I'm removing the common setup boilerplate to keep this post short.
I am using a fragment that extends ArFragment.
class SceneFormARFragment: ArFragment() {
Then of course I have supplied the config with a few tweaks.
override fun getSessionConfiguration(session: Session?): Config {
    val config = super.getSessionConfiguration(session)
    // By default we are not tracking; tracking is driven by startTracking()
    config.planeFindingMode = Config.PlaneFindingMode.DISABLED
    config.focusMode = Config.FocusMode.AUTO
    return config
}
And to start and stop my AR experience I wrote a couple of methods inside the fragment as follows.
private fun startTracking() = viewScope.launchWhenResumed {
    try {
        arSceneView.session?.apply {
            val changedConfig = config
            changedConfig.planeFindingMode = Config.PlaneFindingMode.HORIZONTAL_AND_VERTICAL
            configure(changedConfig)
        }
        logv("startTracking")
        planeDiscoveryController.show()
        arSceneView.planeRenderer.isVisible = true
        arSceneView.cameraStreamRenderPriority = 7
    } catch (ex: Exception) {
        loge("error starting ar session: ${ex.message}")
    }
}

private fun stopTracking() = viewScope.launchWhenResumed {
    try {
        arSceneView.session?.apply {
            val changedConfig = config
            changedConfig.planeFindingMode = Config.PlaneFindingMode.DISABLED
            configure(changedConfig)
        }
        logv("stopTracking")
        planeDiscoveryController.hide()
        arSceneView.planeRenderer.isVisible = false
        arSceneView.cameraStreamRenderPriority = 0
    } catch (ex: Exception) {
        loge("error stopping ar session: ${ex.message}")
    }
}
In case you are wondering, the reason for "starting and stopping" the AR experience is to free up GPU cycles for other heavy UX interactions on this overlaid screen, so we start or stop based on the current LiveData state of everything else that is happening.
Ok moving on.
Let's review the HitResult handling:
In this method I do a few things:
Load two variations of the TV 3D model from the cloud (wall mount and stand mount).
Remove any active model if the user has tapped a new area.
Create an anchor node from the HitResult and assign it a name so I can remove it later.
Add a TV TransformableNode to it and assign it a name so I can retrieve and manipulate it later.
Determine the look direction of the horizontal stand-mount 3D model TV and set the worldRotation of the anchorNode to the new lookRotation. (NOTE: I feel like the rotation should be applied to the tvNode, but it only seems to work when I apply it to the AnchorNode, for whatever reason.) This camera-position math also seems to help the vertical wall-mount TV face outwards and anchor correctly. (I have reviewed the GLB models, and I know they are properly anchored from the back on the wall model and from the bottom on the floor model.)
I then limit the plane movement of the node to its own respective plane type, so that a floor model doesn't slide up to a wall and a wall model doesn't slide down to the floor.
That's about it. The horizontal placement works great, but the vertical placement is always randomized.
OnTapArPlane Code below:
private fun onARSurfaceTapped() {
    setOnTapArPlaneListener { hitResult, plane, _ ->
        var isHorizontal = false
        val renderable = when (plane.type) {
            Plane.Type.HORIZONTAL_UPWARD_FACING -> {
                isHorizontal = true
                standmountTVRenderable
            }
            Plane.Type.VERTICAL -> wallmountTVRenderable
            else -> {
                activity?.toast("Do you want it to fall on your head really?")
                return@setOnTapArPlaneListener
            }
        }
        lastSelectedPlaneOrientation = plane.type
        removeActive3DTVModel()
        val anchorNode = AnchorNode(hitResult.createAnchor())
        anchorNode.name = TV_ANCHOR_NAME
        anchorNode.setParent(arSceneView.scene)
        val tvNode = TransformableNode(this.transformationSystem)
        tvNode.scaleController.isEnabled = false
        tvNode.setParent(anchorNode)
        tvNode.name = TV_NODE_NAME
        tvNode.select()

        // Set orientation towards camera
        // Ref: https://github.com/google-ar/sceneform-android-sdk/issues/379
        val cameraPosition = arSceneView.scene.camera.worldPosition
        val tvPosition = anchorNode.worldPosition
        val direction = Vector3.subtract(cameraPosition, tvPosition)

        // Restrict node movement to its own plane type
        if (isHorizontal) {
            tvNode.translationController.allowedPlaneTypes.clear()
            tvNode.translationController.allowedPlaneTypes.add(Plane.Type.HORIZONTAL_UPWARD_FACING)
        } else {
            tvNode.translationController.allowedPlaneTypes.clear()
            tvNode.translationController.allowedPlaneTypes.add(Plane.Type.VERTICAL)
        }

        val lookRotation = Quaternion.lookRotation(direction, Vector3.up())
        anchorNode.worldRotation = lookRotation
        tvNode.renderable = renderable
        addVideoTo3DModel(renderable)
    }
}
Ignore the addVideoTo3DModel call; it works fine, and I commented it out just to ensure it doesn't play a role.
Things I've tried:
Extracting translation without rotation, as described here. Interestingly enough, it does cause the TV to appear level with the floor each time, but then the TV is always mounted as if the anchor were at the base instead of the center back. So it's bad. (See the sketch after this list.)
Reviewing various posts and translating Unity or ARCore approaches directly into Sceneform, but I failed to get anything to affect the outcome. example
Creating the anchor from the plane and the pose as indicated in this answer, with no luck.
Reviewing this link, but I never found anything useful.
Tracking this issue and trying the solutions recommended by people in the thread, but no luck.
The last thing I tried, and this is a bit embarrassing lol: I opened all 256 questions tagged "sceneform" on Stack Overflow and reviewed EVERY SINGLE one of them for anything that would help.
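For clarity, the first bullet's "translation without rotation" attempt looked roughly like this (a sketch, rebuilding the anchor from the hit pose with an identity rotation; hitResult and arSceneView are the same as in the tap listener above):

// Strip the rotation off the hit pose so the anchor carries translation only.
val hitPose = hitResult.hitPose
val translationOnly = Pose(
    floatArrayOf(hitPose.tx(), hitPose.ty(), hitPose.tz()),
    floatArrayOf(0f, 0f, 0f, 1f) // identity quaternion
)
val anchor = arSceneView.session?.createAnchor(translationOnly) ?: return@setOnTapArPlaneListener
val anchorNode = AnchorNode(anchor)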
So I've exhausted the internet. All I have left is to ask the community, and of course to reach out to the Sceneform team at Android, which I'm also going to do.
My best guess is that I need to use Quaternion.axisAngle(Vector3, Float), but everything I have guessed at or trial-and-errored has not worked. I assume I need to set the localRotation using worldPosition values of the phone, maybe to help identify gravity. I really just don't know anymore lol.
I know Sceneform is pretty new, and the documentation is HORRIBLE; it may as well not exist given the lack of content or doc headers. The developers must really not want people to use it yet, I'm guessing :(.
Last thing I'll say: everything is working perfectly in my current implementation with the exception of the rotated vertical placement. Just to avoid rabbit trails in this discussion, I'm not having any other issues.
Oh, and one last clue that I've noticed:
The TV almost seems to pivot around the center of the vertical plane; based on where I tap, the bottom almost seems to point towards the arbitrary center of the plane, if that helps anyone figure it out.
Oh, and yes, I know my textures are missing from the GLBs; I packaged them incorrectly and intend to fix that later.
Screenshots attached.

Well, I finally got it. It took a while and some serious trial and error of rotating every node, axis, angle, and rotation before I finally got it to place nicely. So I'll share my results in case anyone else needs this as well.
The end result looked like:
Of course it is mildly subjective to how you hold the phone and its understanding of the surroundings, but it's always pretty darn close to level now, without fail, in both the landscape and portrait testing that I have done.
So here's what I've learned.
Setting the worldRotation on the anchorNode keeps the 3D model facing towards the camera view, using a little subtraction:
val cameraPosition = arSceneView.scene.camera.worldPosition
val tvPosition = anchorNode.worldPosition
val direction = Vector3.subtract(cameraPosition, tvPosition)
val lookRotation = Quaternion.lookRotation(direction, Vector3.up())
anchorNode.worldRotation = lookRotation
However, this did not fix the orientation issue on the vertical placement. I found that if I applied a 90-degree X rotation on top of the look rotation, it worked every time. It may differ based on your 3D model, but my anchor is center middle back, so I'm not sure how it determines which way is up. However, I noticed that whenever I set a worldRotation on the tvNode, it would place the TV level but leaning forward 90 degrees. So after playing with the various rotations, I finally got the answer:
val tvRotation = Quaternion.axisAngle(Vector3(1f, 0f, 0f), 90f)
tvNode.worldRotation = tvRotation
That fixed my problem. So the end result of the onSurfaceTap handler and placement was this:
setOnTapArPlaneListener { hitResult, plane, _ ->
    var isHorizontal = false
    val renderable = when (plane.type) {
        Plane.Type.HORIZONTAL_UPWARD_FACING -> {
            isHorizontal = true
            standmountTVRenderable
        }
        Plane.Type.VERTICAL -> wallmountTVRenderable
        else -> {
            activity?.toast("Do you want it to fall on your head really?")
            return@setOnTapArPlaneListener
        }
    }
    lastSelectedPlaneOrientation = plane.type
    removeActive3DTVModel()
    val anchorNode = AnchorNode(hitResult.createAnchor())
    anchorNode.name = TV_ANCHOR_NAME
    anchorNode.setParent(arSceneView.scene)
    val tvNode = TransformableNode(this.transformationSystem)
    tvNode.scaleController.isEnabled = false // disable scaling
    tvNode.setParent(anchorNode)
    tvNode.name = TV_NODE_NAME
    tvNode.select()
    val cameraPosition = arSceneView.scene.camera.worldPosition
    val tvPosition = anchorNode.worldPosition
    val direction = Vector3.subtract(cameraPosition, tvPosition)
    // restrict moving node to active surface orientation
    if (isHorizontal) {
        tvNode.translationController.allowedPlaneTypes.clear()
        tvNode.translationController.allowedPlaneTypes.add(Plane.Type.HORIZONTAL_UPWARD_FACING)
    } else {
        tvNode.translationController.allowedPlaneTypes.clear()
        tvNode.translationController.allowedPlaneTypes.add(Plane.Type.VERTICAL)
        // 90-degree X rotation to mount the TV flat against the vertical plane, level with gravity
        val tvRotation = Quaternion.axisAngle(Vector3(1f, 0f, 0f), 90f)
        tvNode.worldRotation = tvRotation
    }
    // set the anchor node's world rotation to face the camera view, up-aligned
    val lookRotation = Quaternion.lookRotation(direction, Vector3.up())
    anchorNode.worldRotation = lookRotation
    tvNode.renderable = renderable
    viewModel.updateStateTo(AriaMainViewModel.ARFlowState.REPOSITIONING)
}
This has been tested pretty thoroughly, without issues so far, in portrait and landscape. I still have other issues with Sceneform, such as the plane dots only showing up about half the time even when there is a valid surface, and of course vertical detection on a mono-color wall is not possible with the current SDK without a picture on the wall or something else to distinguish the wall.
Also, taking screenshots doesn't work well, as the standard capture doesn't include the 3D model, so that required custom PixelCopy work. My screenshots are a bit slow, but at least they work, no thanks to the SDK.
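For anyone curious, the PixelCopy capture looked roughly like this (a simplified sketch; threading and error handling omitted):

val bitmap = Bitmap.createBitmap(arSceneView.width, arSceneView.height, Bitmap.Config.ARGB_8888)
PixelCopy.request(arSceneView, bitmap, { copyResult ->
    if (copyResult == PixelCopy.SUCCESS) {
        // bitmap now contains the camera feed plus the rendered 3D model
    }
}, Handler(Looper.getMainLooper()))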
So they have a long way to go, and it's frustrating to blaze the trail with their product given the lack of documentation and the definite lack of responsiveness to customer service as well as GitHub-logged issues. But hey, at least I got it, and I hope this helps someone else.
Happy Coding!

Related

Google Maps CameraPosition Jetpack Compose

When I use the CameraPositionState members isMoving and cameraMoveStartedReason, Google Maps freezes and stops responding. My goal is to do some work after the map is moved by gesture.
val singapore = LatLng(1.35, 103.87)
val cameraPositionState: CameraPositionState = rememberCameraPositionState {
    position = CameraPosition.fromLatLngZoom(singapore, 11f)
}
if (cameraPositionState.isMoving &&
    cameraPositionState.cameraMoveStartedReason == CameraMoveStartedReason.GESTURE
) {
    /* TODO */
}
How can I do it better?
Thank you.
Your map freezes because it recomposes while the map moves (which is a LOT, by the way; you can simply print something in your composable to check that out), so you're doing some work on every recomposition.
You should treat it as a side effect instead. The simplest way is with a LaunchedEffect:
LaunchedEffect(cameraPositionState.isMoving) {
    if (cameraPositionState.isMoving && cameraPositionState.cameraMoveStartedReason == CameraMoveStartedReason.GESTURE) {
        // Do your work here; it will run only when the map starts moving from a drag gesture.
    }
}
Alternatively, you may want to look into derivedStateOf (combined with a side effect), which could address this better than a LaunchedEffect alone.
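For example, a minimal sketch of the derivedStateOf variant (same cameraPositionState as above):

val isMovingByGesture by remember {
    derivedStateOf {
        cameraPositionState.isMoving &&
            cameraPositionState.cameraMoveStartedReason == CameraMoveStartedReason.GESTURE
    }
}
LaunchedEffect(isMovingByGesture) {
    if (isMovingByGesture) {
        // runs only when the gesture-driven movement state actually changes
    }
}

derivedStateOf collapses the two rapidly changing properties into a single boolean, so downstream readers only see changes of that boolean rather than every camera update.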
Your map freezes because Compose recreates it on every camera position change. I have written an article on this topic; you can check it out to see how it works:
https://towardsdev.com/jetpack-compose-google-map-camera-movement-listener-erselan-khan-5a9d1b223548
output result:
https://youtu.be/S5LV9bbXtzo

AR Android Sceneform SDK display model only on the floor

I'm using the Sceneform SDK for Android, version
implementation 'com.google.ar.sceneform.ux:sceneform-ux:1.15.0'
I need my 3D model to be displayed only on the floor. For example, I have a 3D cage (a simple transparent cuboid) and I need to place this model over a real object. For now, if the real object has a big enough surface, the model gets placed on top of it instead of over it, and I need to avoid this behavior.
Here is some code.
Logic to init the ArFragment and display the model at the center of the camera: I'm performing a hit test at the center of the device camera every time the frame changes.
private fun initArFragment() {
    arFragment.arSceneView.scene.addOnUpdateListener {
        arFragment.arSceneView?.let { sceneView ->
            sceneView.arFrame?.let { frame ->
                if (frame.camera.trackingState == TrackingState.TRACKING) {
                    val hitTest = frame.hitTest(sceneView.width / 2f, sceneView.height / 2f)
                    val hitTestIterator = hitTest.iterator()
                    if (hitTestIterator.hasNext()) {
                        val hitResult = hitTestIterator.next()
                        val anchor = hitResult.createAnchor()
                        if (anchorNode == null) {
                            anchorNode = AnchorNode()
                            anchorNode?.setParent(sceneView.scene)
                            transformableNode = DragTransformableNode(arFragment.transformationSystem)
                            transformableNode?.setParent(anchorNode)
                            boxNode = createBoxNode(.4f, .6f, .4f) // creating the cuboid
                            boxNode?.setParent(transformableNode)
                        }
                        anchorNode?.anchor?.detach()
                        anchorNode?.anchor = anchor
                    }
                }
            }
        }
    }
}
I think it's expected behavior, because the hit test hits the surface of the real object as well, but I don't know how to avoid it.
Is there a way to ignore real objects and always place the 3D model on the floor?
UPDATE
I tried to follow @Mick's suggestions. I'm grouping all HitResults: when the hit test is done, I get a list of all HitResults for all visible planes and group them by their rounded Y values.
Example
{1.35 -> [Y is 1.36776767, Y is 1.35434343, Y is 1.35999999, Y is 1.37723278]}
{1.40 -> [Y is 1.4121212, Y is 1.403232323, Y is 1.44454545, Y is 1.40001011]}
Then, for the X and Z anchor coordinates, I use the FIRST HitResult from the group with the MIN key after sorting the keys from MIN to MAX (in the example it's 1.35).
For the Y anchor coordinate, I take the elements of that MIN group and average them.
val hitResultList = hitTestIterator.asSequence().toList()
    .groupBy { round(it.hitPose.ty() * 20) / 20 }
    .minBy { it.key }!!.value
val hitResult = hitResultList.first()
val averageValueOfY = hitResultList.map { it.hitPose.ty() }.average()
createModel(hitResult, averageValueOfY)
Method to create Model
private fun createModel(newHitResult: HitResult, averageValueOfY: Double) {
    try {
        val newAnchorPose = newHitResult.createAnchor().pose
        anchorNode?.anchor?.detach()
        anchorNode?.anchor = arFragment.arSceneView.session?.createAnchor(
            Pose(
                floatArrayOf(newAnchorPose.tx(), averageValueOfY.toFloat(), newAnchorPose.tz()),
                floatArrayOf(0f, 0f, 0f, 1f) // identity quaternion (w = 1), not all zeros
            )
        )
        isArBagModelRendered = true
        transformableNode?.select()
    } catch (exception: Exception) {
        Timber.d(exception)
    }
}
This code update produced the behavior I was trying to achieve, but I noticed that sometimes my Y anchor point is underground; it looks like the MIN plane was detected under the floor :( and I don't know how to fix this issue for now.
Actually, I think you have two separate problems for your use case:
the object being placed on the top surface, as you have highlighted in the question;
occlusion, i.e. not hiding the part of the model that should be behind the real object once the model is actually put in the correct place.
A simple solution to the first problem (or more accurately a workaround), so you can check the second, might be to simply have the user place the object in front of the real object, i.e. the case in your example above, and then move it back until it is exactly where they want it.
If you leave the plane highlighting on, i.e. the grid lines which show where a plane is detected, it may be more intuitive for a user to 'hit' the floor rather than the top of the object.
This would let you quickly test whether the occlusion issue is actually the more serious one before you go much further.
A more complex solution would be to iterate through the planes and experiment with comparing the pose at the center of each plane, to see if you can find a reliable way to decide which one is the floor. The method is part of the Plane class:
public Pose getCenterPose ()
Returns the pose of the center of the detected plane. The pose's transformed +Y axis will be point normal out of the plane, with the +X and +Z axes orienting the extents of the bounding rectangle.
There are also methods to get the size of the width or depth of the plane if you were sure the floor will always be the biggest plane:
public float getExtentX ()
Returns the length of this plane's bounding rectangle measured along the local X-axis of the coordinate space centered on the plane.
public float getExtentZ ()
Returns the length of this plane's bounding rectangle measured along the local Z-axis of the coordinate frame centered on the plane.
Unfortunately, I don't think there is any existing handy helper like 'get lowest plane' or 'get largest plane'.
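Still, a heuristic along those lines is easy to sketch (assuming access to the ARCore Session, and guessing that the floor is the lowest tracked upward-facing plane):

// Pick the tracked, upward-facing horizontal plane whose center is lowest;
// a heuristic for "the floor", not a guaranteed result.
fun findLikelyFloor(session: Session): Plane? =
    session.getAllTrackables(Plane::class.java)
        .filter { it.trackingState == TrackingState.TRACKING }
        .filter { it.type == Plane.Type.HORIZONTAL_UPWARD_FACING }
        .minByOrNull { it.centerPose.ty() }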
Note on the occlusion issue: there are frameworks and libraries that provide some forms of software-based occlusion, i.e. without requiring the device to have extra depth sensors, so it may be worth exploring these a little as well.

How to take a picture where all settings are set manually including the flash without missing the image that contains the full flash?

I used the latest Camera2Basic sample program as a source for my trials:
https://github.com/android/camera-samples.git
Basically, I configure the CaptureRequest before calling the capture() function in takePhoto(), like this:
private fun prepareCaptureRequest(captureRequest: CaptureRequest.Builder) {
    // set all needed camera settings here
    captureRequest.set(CaptureRequest.CONTROL_MODE, CaptureRequest.CONTROL_MODE_OFF)
    captureRequest.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_OFF)
    //captureRequest.set(CaptureRequest.CONTROL_AF_TRIGGER, CaptureRequest.CONTROL_AF_TRIGGER_CANCEL)
    //captureRequest.set(CaptureRequest.CONTROL_AWB_LOCK, true)
    captureRequest.set(CaptureRequest.CONTROL_AWB_MODE, CaptureRequest.CONTROL_AWB_MODE_OFF)
    captureRequest.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF)
    //captureRequest.set(CaptureRequest.CONTROL_AE_LOCK, true)
    //captureRequest.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER, CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER_CANCEL)
    //captureRequest.set(CaptureRequest.NOISE_REDUCTION_MODE, CaptureRequest.NOISE_REDUCTION_MODE_FAST)

    // flash
    if (mState == CaptureState.PRECAPTURE) {
        //captureRequest.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF)
        captureRequest.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_OFF)
    }
    if (mState == CaptureState.TAKEPICTURE) {
        //captureRequest.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_SINGLE)
        //captureRequest.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_ALWAYS_FLASH)
        captureRequest.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_SINGLE)
    }

    val iso = 100
    captureRequest.set(CaptureRequest.SENSOR_SENSITIVITY, iso)

    val fractionOfASecond = 750.toLong()
    captureRequest.set(CaptureRequest.SENSOR_EXPOSURE_TIME, 1000.toLong() * 1000.toLong() * 1000.toLong() / fractionOfASecond)
    //val exposureTime = 133333.toLong()
    //captureRequest.set(CaptureRequest.SENSOR_EXPOSURE_TIME, exposureTime)

    //val characteristics = cameraManager.getCameraCharacteristics(cameraId)
    //val configs: StreamConfigurationMap? = characteristics[CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP]
    //val frameDuration = 33333333.toLong()
    //captureRequest.set(CaptureRequest.SENSOR_FRAME_DURATION, frameDuration)

    val focusDistanceCm = 20.0.toFloat() // 20cm
    captureRequest.set(CaptureRequest.LENS_FOCUS_DISTANCE, 100.0f / focusDistanceCm)

    //captureRequest.set(CaptureRequest.COLOR_CORRECTION_MODE, CameraMetadata.COLOR_CORRECTION_MODE_FAST)
    captureRequest.set(CaptureRequest.COLOR_CORRECTION_MODE, CaptureRequest.COLOR_CORRECTION_MODE_TRANSFORM_MATRIX)
    val colorTemp = 8000.toFloat()
    val rggb = colorTemperature(colorTemp)
    //captureRequest.set(CaptureRequest.COLOR_CORRECTION_TRANSFORM, colorTransform)
    captureRequest.set(CaptureRequest.COLOR_CORRECTION_GAINS, rggb)
}
But the picture that is returned is never the one where the flash is at its brightest. This is on a Google Pixel 2 device.
As I take only one picture, I am also not sure how to check CaptureResult states to find the correct one, since there is only one.
I already looked at the other solutions to similar problems here, but they were either never really solved or somehow took the picture during the capture preview, which I don't want.
Another strange observation is that on different devices the images are taken (also not always at the right moment), but then the manual values I set are not reflected in the JPEG metadata of the image.
If needed I can put my git fork on github.
A long exposure time in combination with flash seems to be the basic issue; when the results are not good, it means the timing of your preset isn't right. You'd have to optimize the exposure time's duration in relation to the flash's timing (just check the EXIF of some photos for example values). You could measure the luminosity with an ImageAnalysis.Analyzer (this has been removed from the sample application, but older revisions still have an example). I've also tried with the default Motorola camera app; there the photo also seems to be taken shortly after the flash, when the brightness is already decaying (in order to avoid dazzling brightness). That's the CaptureState.PRECAPTURE, where you switch the flash off. Flashing in two stages is the default, and this might yield better results.
If you want it dazzlingly bright (even if this is generally not desired), you could instead switch on the torch, take the image, then switch off the torch again (I use something like this, but only for barcode scanning). This would at least prevent any exposure/flash timing issues.
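A rough sketch of that torch workaround, assuming the captureSession and previewRequestBuilder objects from the Camera2Basic sample (the names are illustrative):

// Keep the torch lit for the whole still capture instead of timing a single flash pulse.
previewRequestBuilder.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_TORCH)
captureSession.setRepeatingRequest(previewRequestBuilder.build(), null, cameraHandler)
// ... trigger the still capture() here while the torch is on ...
previewRequestBuilder.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_OFF)
captureSession.setRepeatingRequest(previewRequestBuilder.build(), null, cameraHandler)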
When changed values are not represented in the EXIF, you'd need to use ExifInterface to update them (there's an example which updates the orientation, but one can update any value).
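For instance, a small sketch with the androidx ExifInterface (photoFile and the written values are placeholders):

val exif = ExifInterface(photoFile.absolutePath)
exif.setAttribute(ExifInterface.TAG_EXPOSURE_TIME, (1.0 / 750).toString())
exif.setAttribute(ExifInterface.TAG_PHOTOGRAPHIC_SENSITIVITY, "100")
exif.saveAttributes() // rewrites the EXIF block of the file in place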

Android Arcore Plane Tracker Target

I can't find anything on the forums. I would like to create a target reticle in ARCore that sticks to the detected plane, like in the Measure app (this avoids the lag of an image simply centered on the layout).
Do you have a starting point?
I was thinking about something like this, but it doesn't stay in the center:
com.google.ar.sceneform.Camera camera = arFragment.getArSceneView().getScene().getCamera();
MaterialFactory.makeTransparentWithColor(MainActivity.this, new com.google.ar.sceneform.rendering.Color(Color.parseColor("#ff333d")))
        .thenAccept(material -> {
            nodeRenderable = ShapeFactory.makeSphere(0.008f, new Vector3(camera.getWorldPosition().x, camera.getWorldPosition().y, camera.getWorldPosition().z), material);
        });
What you should do is create a Node and add it as a child of the scene.
Then, on each Node.onUpdate(FrameTime), do these steps:
Perform a hitTest from the center of the ArSceneView.
Find the first HitResult that is on a Plane with isPoseInPolygon() == true.
Update the Node's worldPosition and worldRotation to match the hit pose's translation and rotation.
You can take a look at this Reticle class which is doing exactly that: https://github.com/SimonMarquis/AR-Toolbox/blob/fb31a9cfdf061104a4401cecc9bc73ffa7ad33e6/app/src/main/java/fr/smarquis/ar_toolbox/Settings.kt#L124-L185
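As a rough Kotlin sketch of those three steps (the Reticle name and structure here are illustrative; the linked class is the complete implementation):

class Reticle(private val sceneView: ArSceneView) : Node() {
    override fun onUpdate(frameTime: FrameTime) {
        val frame = sceneView.arFrame ?: return
        // 1. hit test from the center of the view
        val hit = frame.hitTest(sceneView.width / 2f, sceneView.height / 2f)
            // 2. first result on a plane, within the plane's polygon
            .firstOrNull { (it.trackable as? Plane)?.isPoseInPolygon(it.hitPose) == true }
            ?: return
        // 3. match the hit pose's translation and rotation
        val pose = hit.hitPose
        worldPosition = Vector3(pose.tx(), pose.ty(), pose.tz())
        worldRotation = Quaternion(pose.qx(), pose.qy(), pose.qz(), pose.qw())
    }
}

Attach it once with sceneView.scene.addChild(Reticle(sceneView)) and give it a small renderable.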

Sceneform Collisions With Camera

I'm stretching my very limited ARCore knowledge.
My question is similar (but different) to this question
I want to work out whether my device's camera node intersects/overlaps with my other nodes, but I've not been having any luck so far.
I'm trying something like this (the camera is just another node):
scene.setOnUpdateListener(frameTime -> {
    Node x = scene.overlapTest(scene.getCamera());
    if (x != null) {
        Log.i(TAG, "setUpArComponents: CAMERA HIT DETECTED at: " + x.getName());
        logNodeStatus(x);
    }
});
Firstly, does this make sense?
I can detect all node collisions in my scene using:
for (Node node : nodes) {
    ...
    ArrayList<Node> results = scene.overlapTestAll(node);
    ...
}
Assuming that there isn't a renderable for the camera node (and so no default collision shape), I tried to set my own collision shape, but this was actually catching all the tap events I was trying to perform, so I figured I must be doing it wrong.
I'm thinking about things like fixing a deactivated node in front of the camera.
I may be asking for too much of ARCore, but has anyone found a way to detect a collision between the "user" (i.e. camera node) and another node? Or should I be doing this "collision detection" via indoor positioning instead?
Thanks in advance :)
UPDATE: it's really hacky and performance-heavy, but you can actually compare the camera's and the node's world-space positions from within onUpdate inside a node. You'll probably have to manage some tolerance and other things to smooth out the interactions.
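A minimal sketch of that approach, inside a custom Node (the 0.3 m tolerance is an arbitrary example value to tune):

class CameraProximityNode : Node() {
    override fun onUpdate(frameTime: FrameTime) {
        val cameraPosition = scene?.camera?.worldPosition ?: return
        val distance = Vector3.subtract(cameraPosition, worldPosition).length()
        if (distance < 0.3f) {
            // treat this as the "user" (camera) colliding with the node
        }
    }
}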
One idea to achieve the same thing is to raycast from the camera and, if the hit object is close enough, react to it. You could use something like this in the onUpdateListener:
Camera camera = arSceneView.getScene().getCamera();
Ray ray = new Ray(camera.getWorldPosition(), camera.getForward());
HitTestResult result = arSceneView.getScene().hitTest(ray);
if (result.getNode() != null && result.getDistance() <= SOME_THRESHOLD) {
    // Hit something
    doSomething(result.getNode());
}
