Sceneform Collisions With Camera - android

I'm stretching my very limited ARCore knowledge.
My question is similar (but different) to this question.
I want to work out if my device camera node intersects/overlaps with any of my other nodes, but I've not had any luck so far.
I'm trying something like this (the camera is another node):
scene.setOnUpdateListener(frameTime -> {
    Node x = scene.overlapTest(scene.getCamera());
    if (x != null) {
        Log.i(TAG, "setUpArComponents: CAMERA HIT DETECTED at: " + x.getName());
        logNodeStatus(x);
    }
});
Firstly, does this make sense?
I can detect all node collisions in my scene using:
for (Node node : nodes) {
    ...
    ArrayList<Node> results = scene.overlapTestAll(node);
    ...
}
Assuming the Camera node has no renderable (and therefore no default collision shape), I tried to set my own collision shape on it, but this ended up catching all the tap events I was trying to perform elsewhere, so I figured I must be doing this wrong.
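For reference, attaching a shape to the camera looks roughly like this (a sketch; Sphere comes from com.google.ar.sceneform.collision, and the 0.1 m radius is an arbitrary placeholder), which presumably is why every scene hit test then collided with the camera first:

import com.google.ar.sceneform.collision.Sphere;
import com.google.ar.sceneform.math.Vector3;

// Give the camera node an explicit collision shape so overlap tests can see it.
scene.getCamera().setCollisionShape(new Sphere(0.1f, Vector3.zero()));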
I'm thinking about things like fixing a deactivated node in front of the camera.
I may be asking for too much of ARCore, but has anyone found a way to detect a collision between the "user" (i.e. camera node) and another node? Or should I be doing this "collision detection" via indoor positioning instead?
Thanks in advance :)
UPDATE: it's really hacky and performance-heavy, but you can compare the camera's and a node's world-space positions from within the node's onUpdate; you'll probably have to manage some tolerance and other things to smooth out the interactions.
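A rough sketch of that distance check (the Node subclass, the COLLISION_RADIUS tolerance, and its value are my own placeholders):

import com.google.ar.sceneform.FrameTime;
import com.google.ar.sceneform.Node;
import com.google.ar.sceneform.math.Vector3;

public class ProximityNode extends Node {
    // Hypothetical tolerance in meters; tune it to smooth out interactions.
    private static final float COLLISION_RADIUS = 0.3f;

    @Override
    public void onUpdate(FrameTime frameTime) {
        if (getScene() == null) return;
        // Compare this node's world position with the camera's world position.
        Vector3 cameraPosition = getScene().getCamera().getWorldPosition();
        float distance = Vector3.subtract(cameraPosition, getWorldPosition()).length();
        if (distance < COLLISION_RADIUS) {
            // Close enough to treat as a "camera collision"; react here.
        }
    }
}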

One way to do the same thing is to cast a ray from the camera and check whether it hits a node within some distance. You could use something like this in the onUpdateListener:
Camera camera = arSceneView.getScene().getCamera();
Ray ray = new Ray(camera.getWorldPosition(), camera.getForward());
HitTestResult result = arSceneView.getScene().hitTest(ray);
if (result.getNode() != null && result.getDistance() <= SOME_THRESHOLD) {
    // Hit something within range
    doSomething(result.getNode());
}

Related

How to orient GLTF in Android Sceneform level with gravity

I've been getting my butt kicked trying to get a vertically placed 3D model (GLB format) placed properly on a vertical surface.
Just to be clear, I am not referring to the difficulty of identifying a vertical surface; that is a whole other problem in itself.
I'm removing the common setup boilerplate to keep this post short.
I am using a fragment that extends ArFragment.
class SceneFormARFragment: ArFragment() {
Then of course I have supplied the config with a few tweaks.
override fun getSessionConfiguration(session: Session?): Config {
    val config = super.getSessionConfiguration(session)
    // By default we are not tracking; tracking is driven by startTracking()
    config.planeFindingMode = Config.PlaneFindingMode.DISABLED
    config.focusMode = Config.FocusMode.AUTO
    return config
}
And to start and stop my AR experience I wrote a couple of methods inside the fragment as follows.
private fun startTracking() = viewScope.launchWhenResumed {
    try {
        arSceneView.session?.apply {
            val changedConfig = config
            changedConfig.planeFindingMode = Config.PlaneFindingMode.HORIZONTAL_AND_VERTICAL
            configure(changedConfig)
        }
        logv("startTracking")
        planeDiscoveryController.show()
        arSceneView.planeRenderer.isVisible = true
        arSceneView.cameraStreamRenderPriority = 7
    } catch (ex: Exception) {
        loge("error starting ar session: ${ex.message}")
    }
}

private fun stopTracking() = viewScope.launchWhenResumed {
    try {
        arSceneView.session?.apply {
            val changedConfig = config
            changedConfig.planeFindingMode = Config.PlaneFindingMode.DISABLED
            configure(changedConfig)
        }
        logv("stopTracking")
        planeDiscoveryController.hide()
        arSceneView.planeRenderer.isVisible = false
        arSceneView.cameraStreamRenderPriority = 0
    } catch (ex: Exception) {
        loge("error stopping ar session: ${ex.message}")
    }
}
In case you are wondering, the reason for "starting and stopping" the AR experience is to free up GPU cycles for other heavy UX interactions on this overlaid screen, so we start or stop based on the current live-data state of everything else that is happening.
Ok moving on.
Let's review the HitResult handling:
In this method I do a few things:
1. Load two variations of the TV 3D model from the cloud (wall mount and stand mount).
2. Remove any active models if a new area has been tapped.
3. Create an anchor node from the HitResult and assign it a name so it can be removed later.
4. Add a TV TransformableNode to it and assign it a name so it can be retrieved and manipulated later.
5. Determine the look direction of the horizontal stand-mount 3D model TV and set the worldRotation of the anchorNode to the new lookRotation. (NOTE: I feel like the rotation should be applied to the tvNode, but it only seems to work when I apply it to the anchorNode, for whatever reason.) This camera-position math also seems to help the vertical wall-mount TV face outwards and anchor correctly. (I have reviewed the GLB models and I know they are anchored properly: from the back on the wall model and from the bottom on the floor model.)
6. Limit the plane movement of the node to its own respective plane type, so that a floor model doesn't slide up to a wall and a wall model doesn't slide down to the floor.
That's about it. The horizontal placement works great, but the vertical placement is always randomized.
OnTapArPlane Code below:
private fun onARSurfaceTapped() {
    setOnTapArPlaneListener { hitResult, plane, _ ->
        var isHorizontal = false
        val renderable = when (plane.type) {
            Plane.Type.HORIZONTAL_UPWARD_FACING -> {
                isHorizontal = true
                standmountTVRenderable
            }
            Plane.Type.VERTICAL -> wallmountTVRenderable
            else -> {
                activity?.toast("Do you want it to fall on your head really?")
                return@setOnTapArPlaneListener
            }
        }
        lastSelectedPlaneOrientation = plane.type
        removeActive3DTVModel()
        val anchorNode = AnchorNode(hitResult.createAnchor())
        anchorNode.name = TV_ANCHOR_NAME
        anchorNode.setParent(arSceneView.scene)
        val tvNode = TransformableNode(this.transformationSystem)
        tvNode.scaleController.isEnabled = false
        tvNode.setParent(anchorNode)
        tvNode.name = TV_NODE_NAME
        tvNode.select()
        // Set orientation towards camera
        // Ref: https://github.com/google-ar/sceneform-android-sdk/issues/379
        val cameraPosition = arSceneView.scene.camera.worldPosition
        val tvPosition = anchorNode.worldPosition
        val direction = Vector3.subtract(cameraPosition, tvPosition)
        if (isHorizontal) {
            tvNode.translationController.allowedPlaneTypes.clear()
            tvNode.translationController.allowedPlaneTypes.add(Plane.Type.HORIZONTAL_UPWARD_FACING)
        } else {
            tvNode.translationController.allowedPlaneTypes.clear()
            tvNode.translationController.allowedPlaneTypes.add(Plane.Type.VERTICAL)
        }
        val lookRotation = Quaternion.lookRotation(direction, Vector3.up())
        anchorNode.worldRotation = lookRotation
        tvNode.renderable = renderable
        addVideoTo3DModel(renderable)
    }
}
Ignore the addVideoTo3DModel call; it works fine, and I commented it out just to ensure it doesn't play a role.
Things I've tried:
- Extracting translation without rotation, as described here. Interestingly enough, it does cause the TV to appear level with the floor each time, but then the TV is always mounted as if the anchor were at the base instead of the center back. So it's bad.
- Reviewing various posts and translating Unity or ARCore code directly into Sceneform, but I failed to get anything to affect the outcome. example
- Creating the anchor from the plane and the pose, as indicated in this answer, with no luck.
- Reviewing this link, but I never found anything useful.
- Tracking this issue and trying the solutions recommended by people in the thread, but no luck.
- The last thing I tried, and this is a bit embarrassing lol: I opened all 256 questions tagged "SceneForm" on Stack Overflow and reviewed EVERY SINGLE one of them for anything that would help.
So I've exhausted the internet. All I have left is to ask the community, and of course to reach out to the Sceneform team at Android, which I'm also going to do.
My best guess is that I need to do a Quaternion.axisAngle(Vector3, Float) rotation, but everything I have guessed at or trial-and-errored has not worked. I assume I need to set the localRotation using worldPosition values for the xyz of the phone, maybe, to help identify gravity. I really just don't know anymore lol.
I know Sceneform is pretty new, and the documentation is HORRIBLE; it may as well not exist given the lack of content and doc headers. The developers must really not want people to use it yet, I'm guessing :(.
Last thing I'll say: everything is working perfectly in my current implementation, with the exception of the rotated vertical placement. Just to avoid rabbit trails in this discussion, I'm not having any other issues.
Oh, and one last clue that I've noticed: the TV almost seems to pivot around the center of the vertical plane; based on where I tap, the bottom almost seems to point towards the arbitrary center of the plane, if that helps anyone figure it out.
Oh, and yes, I know my textures are missing from the GLBs; I packaged them incorrectly and intend to fix that later.
Screenshots attached.
Well, I finally got it. It took a while and some serious trial and error of rotating every node, axis, angle, and rotation before I finally got it to place nicely. So I'll share my results in case anyone else needs this as well.
The end result looked like:
Of course it is mildly subjective to how you hold the phone and its understanding of the surroundings, but it's always pretty darn close to level now, without fail, in both the landscape and portrait testing that I have done.
So here's what I've learned.
Setting the worldRotation on the anchorNode will help keep the 3D model facing towards the camera view, using a little subtraction.
val cameraPosition = arSceneView.scene.camera.worldPosition
val tvPosition = anchorNode.worldPosition
val direction = Vector3.subtract(cameraPosition, tvPosition)
val lookRotation = Quaternion.lookRotation(direction, Vector3.up())
anchorNode.worldRotation = lookRotation
However, this did not fix the orientation issue on the vertical placement. I found that if I applied an X rotation of 90 degrees on top of the look rotation, it worked every time. It may differ based on your 3D model, but my anchor is center middle back, so I'm not sure how it determines which way is up. However, I noticed that whenever I set a worldRotation on the tvNode, it would place the TV level but leaning forward 90 degrees. So after playing with the various rotations, I finally got the answer.
val tvRotation = Quaternion.axisAngle(Vector3(1f, 0f, 0f), 90f)
tvNode.worldRotation = tvRotation
That fixed up my problem. So the end result of the onSurfaceTap and placement was this:
setOnTapArPlaneListener { hitResult, plane, _ ->
    var isHorizontal = false
    val renderable = when (plane.type) {
        Plane.Type.HORIZONTAL_UPWARD_FACING -> {
            isHorizontal = true
            standmountTVRenderable
        }
        Plane.Type.VERTICAL -> wallmountTVRenderable
        else -> {
            activity?.toast("Do you want it to fall on your head really?")
            return@setOnTapArPlaneListener
        }
    }
    lastSelectedPlaneOrientation = plane.type
    removeActive3DTVModel()
    val anchorNode = AnchorNode(hitResult.createAnchor())
    anchorNode.name = TV_ANCHOR_NAME
    anchorNode.setParent(arSceneView.scene)
    val tvNode = TransformableNode(this.transformationSystem)
    tvNode.scaleController.isEnabled = false // disable scaling
    tvNode.setParent(anchorNode)
    tvNode.name = TV_NODE_NAME
    tvNode.select()
    val cameraPosition = arSceneView.scene.camera.worldPosition
    val tvPosition = anchorNode.worldPosition
    val direction = Vector3.subtract(cameraPosition, tvPosition)
    // restrict moving node to active surface orientation
    if (isHorizontal) {
        tvNode.translationController.allowedPlaneTypes.clear()
        tvNode.translationController.allowedPlaneTypes.add(Plane.Type.HORIZONTAL_UPWARD_FACING)
    } else {
        tvNode.translationController.allowedPlaneTypes.clear()
        tvNode.translationController.allowedPlaneTypes.add(Plane.Type.VERTICAL)
        // x 90-degree rotation to mount the TV flat, vertical with gravity
        val tvRotation = Quaternion.axisAngle(Vector3(1f, 0f, 0f), 90f)
        tvNode.worldRotation = tvRotation
    }
    // set anchor node's world rotation to face the camera view and up
    val lookRotation = Quaternion.lookRotation(direction, Vector3.up())
    anchorNode.worldRotation = lookRotation
    tvNode.renderable = renderable
    viewModel.updateStateTo(AriaMainViewModel.ARFlowState.REPOSITIONING)
}
This has been tested pretty thoroughly, without issues so far, in portrait and landscape. I still have other issues with Sceneform, such as the plane dots only showing up about half the time even when there is a valid surface, and of course vertical detection on a mono-color wall is not possible with the current SDK without a picture on the wall or something else to distinguish it.
Also, taking screenshots is painful: a normal screenshot doesn't include the 3D model, so that required custom PixelCopy work, and my screenshots are a bit slow, but at least they work, no thanks to the SDK.
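For anyone facing the same thing, the PixelCopy route looks roughly like this (a sketch in Java; the captureScene helper and the save step are my own illustration, not part of Sceneform):

import android.graphics.Bitmap;
import android.os.Handler;
import android.os.HandlerThread;
import android.view.PixelCopy;
import com.google.ar.sceneform.ArSceneView;

// ArSceneView extends SurfaceView, so PixelCopy can grab the composited
// frame, including the camera feed and any rendered models.
void captureScene(ArSceneView sceneView) {
    final Bitmap bitmap = Bitmap.createBitmap(
            sceneView.getWidth(), sceneView.getHeight(), Bitmap.Config.ARGB_8888);
    final HandlerThread thread = new HandlerThread("PixelCopy");
    thread.start();
    PixelCopy.request(sceneView, bitmap, copyResult -> {
        if (copyResult == PixelCopy.SUCCESS) {
            // Save or share the bitmap here.
        }
        thread.quitSafely();
    }, new Handler(thread.getLooper()));
}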
So they have a long way to go, and it's frustrating to blaze the trail with their product given the lack of documentation, and definitely the lack of responsiveness to customer service as well as to GitHub-logged issues. But hey, at least I got it, and I hope this helps someone else.
Happy Coding!

Is it possible to retrieve the physical (device) camera height over the physical floor?

I'm developing an app using ARCore. In this app I need:
1) to place an object that always stays at the same pose in world space. Following the recommendations of the "Working with Anchors" article (https://developers.google.com/ar/develop/developer-guides/anchors), I'm attaching an anchor to the ARCore Session. That is, I'm not using Trackables at all.
2) as a secondary requirement, the object must be placed automatically, that is, without tapping on the screen.
I've managed to satisfy both requirements, and I now have the object "floating" in front of me, this way (very common code):
private void onSceneUpdate(FrameTime frameTime) {
    ...
    if (_renderable != null && _anchorNode == null) {
        float[] position = {0f, 0f, -10f};
        float[] rotation = {0, 0, 0, 1};
        //
        Anchor anchor = _arFragment.getArSceneView().getSession().createAnchor(new Pose(position, rotation));
        //
        _anchorNode = new AnchorNode(anchor);
        _anchorNode.setRenderable(_renderable);
        _anchorNode.setParent(_arFragment.getArSceneView().getScene());
        _anchorNode.setLocalScale(new Vector3(0.01f, 0.01f, 0.01f)); // cm -> m
        ...
    }
}
As I want the object to be on the floor, I need to find out the height of my physical (device) camera above the floor, in order to subtract that value from the current object's Y coordinate:
float[] position = {0f, HERE_THE_VALUE_TO_SUBTRACT_FROM_CAMERA_HEIGHT, -10f};
Certainly, this would be an easy implementation if plane Trackables were used, but here I have the above-named requirements.
I've tried different camera/device pose retrieval APIs, namely frame.getAndroidSensorPose(), frame.getCamera().getPose(), and frame.getCamera().getDisplayOrientedPose(), but they do not yield values I can use for this.
Thanks for your advice.
EDIT after Michael Dougan's comments.
Well, I think we then have two ways to achieve the requirements:
1) leave the code without changes, keep using the Session anchor, and ask the user to launch the app and then follow a "calibration process" with the device on the floor. As this is a professional-use app, and not a consumer one, we think that is perfectly suitable;
2) go ahead with the good old Trackables, using the usual floor plane as an anchor, and including the pose of that anchor in the calculation of the 3D model's position.
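If we go with option 2, the height calculation could look roughly like this (a minimal sketch, assuming session and frame references like the ones used in onSceneUpdate above; picking the first tracked upward-facing plane as "the floor" is my simplification):

// Option 2 sketch: treat a tracked horizontal plane as the floor and take the
// difference in world-space Y between the camera pose and the plane's center.
for (Plane plane : session.getAllTrackables(Plane.class)) {
    if (plane.getTrackingState() == TrackingState.TRACKING
            && plane.getType() == Plane.Type.HORIZONTAL_UPWARD_FACING) {
        float cameraHeight = frame.getCamera().getPose().ty() - plane.getCenterPose().ty();
        // Subtract cameraHeight from the session anchor's Y to rest the object on the floor.
        break;
    }
}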

Moving a game object in Arcore

I have been working with Google ARCore and got stuck on how to move the game object with input coming from the Android device.
The canvas that I have created has precisely 4 buttons, which use the AxisTouchButton script from CrossPlatformInput, covering the vertical and horizontal axes. I have tried Lean Touch to scale, translate, and rotate, and that seems to work perfectly. But when I apply force or velocity to the game object, it moves perfectly the first time; then when I use the axis buttons again, it starts to float in that particular direction unless another button is pressed.
The code below moves the game object attached to the Andy prefab in the HelloAR scene from the examples:
Vector3 offset = Vector3.zero;
offset.x = CrossPlatformInputManager.GetAxis("Horizontal");
offset.z = CrossPlatformInputManager.GetAxis("Vertical");
rb.velocity = offset * speed;
I'm not sure why your prefab is drifting with the code snippet you've provided.
Try resetting the velocity to zero after you are done with the movement of the prefab:
rb.velocity = new Vector3(0,0,0);
Or maybe it is because you are moving the prefab too far away from its parent anchor, or away from the plane detected by ARCore.
But I have another tested way to move a prefab using touch input on the planes detected by ARCore; since it only lets you move the prefab on the detected planes, you can easily reset its anchor after you are done repositioning it.
I modified the HelloARController.cs script in the following way.
bool move = false; // handle move with some button calls

void Update() {
    // add this in your update method to call MoveObject()
    // handle move with some buttons
    if (move) {
        MoveObject();
    }
}

void MoveObject() {
    if (Input.touchCount == 1) {
        Touch touch = Input.GetTouch(0);
        TrackableHit hit;
        TrackableHitFlags raycastFilter = TrackableHitFlags.PlaneWithinPolygon | TrackableHitFlags.FeaturePointWithSurfaceNormal;
        if (Frame.Raycast(touch.position.x, touch.position.y, raycastFilter, out hit)) {
            if ((hit.Trackable is DetectedPlane) && Vector3.Dot(firstPersonCamera.transform.position - hit.Pose.position, hit.Pose.rotation * Vector3.up) < 0) {
                Debug.Log("Hit at back of the current detected plane");
            }
            else { // KEY CODE SNIPPET: moves the selectedObject to the touch location on detected planes
                selectedObject.transform.position = hit.Pose.position;
            }
        }
        else {
            Debug.Log("Not moving");
        }
    }
}
Here, selectedObject is your Andy prefab or whatever you are instantiating.
Make sure that you are instantiating only one prefab at a time and assigning it to selectedObject.
Alternatively, try out the new ARCore Manipulation System. It works like a charm (for newbies).
They forgot to add a collider to the prefab, so don't forget to add one before running the example.
ARCore Unity SDK v1.13.0

Unity2D Android Touch misbehaving

I am attempting to translate an object depending on the touch position of the user.
The problem is that when I test it out, the object disappears as soon as I drag my finger on my phone screen. I am not entirely sure what's going on with it.
If somebody can guide me that would be great :)
Thanks
This is the Code:
#pragma strict
function Update () {
    for (var touch : Touch in Input.touches)
    {
        if (touch.phase == TouchPhase.Moved) {
            transform.Translate(0, touch.position.y, 0);
        }
    }
}
The problem is that you're moving the object by touch.position.y. This isn't a point in the world; it's a point on the touch screen. What you'll probably want is Camera.main.ScreenToWorldPoint(touch.position).y, which will give you the in-world position of wherever you've touched.
Of course, Translate takes a vector indicating a distance, not a final destination, so simply sticking the above into it still won't work as you're intending.
Instead maybe try this:
Vector3 EndPos = Camera.main.ScreenToWorldPoint(touch.position);
float speed = 1f;
transform.position = Vector3.Lerp(transform.position, EndPos, speed * Time.deltaTime);
which should move the object towards your finger while keeping its movement smooth-looking.
You'll want to ask this question at Unity's dedicated Questions/Answers site: http://answers.unity3d.com/index.html
There are very few people who come to Stack Overflow for Unity-specific questions, unless they relate to Android/iOS-specific features.
As for the cause of your problem: touch.position.y is defined in screen space (pixels), whereas transform.Translate expects world units (meters). You can convert between the two using the Camera.ScreenToWorldPoint() method, then create a vector from the camera position and the screen world point. With this vector you can then either intersect some geometry in the scene or simply use it as a point in front of the camera.
http://docs.unity3d.com/Documentation/ScriptReference/Camera.ScreenToWorldPoint.html

Face detection + 3d model in android

I am using Camera.Face to detect faces and min3D to load 3D models.
I want the model to move with the face, but it is not working well.
@Override
public void updateScene() {
    if (mFaces == null) {
        animeModel.position().x = animeModel.position().y = animeModel.position().z = 0;
        return;
    }
    for (Face face : mFaces) {
        if (face == null) {
            continue;
        }
        animeModel.position().x = face.rect.centerX();
        animeModel.position().y = face.rect.centerY();
    }
}
Are the model's coordinates and the rectangle's coordinates in different systems (world coordinates vs. screen coordinates, or something like that)?
How do I solve this?
UPDATE:
I have tried to get the model's coordinates and the face's coordinates.
These two values are totally different.
How can I convert face.rect.centerX() to animeModel.position().x?
Here is an article all about how a face tracking demo was developed:
http://www.smallscreendesign.com/2011/02/07/about-face-detection-on-android-%E2%80%93-part-1/
That app is also available on the Play Store. Part 1 of the article above has some performance metrics on recognition time; it looks like it may take up to two seconds or more to detect a face.
You could use the code in that article for prototyping. You may discover that face detection doesn't happen fast or often enough to track a face in real time.
Here is the documentation for face tracking on the Android Developer site:
http://developer.android.com/reference/android/hardware/Camera.Face.html
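On the coordinate mismatch: Camera.Face reports face.rect in the camera driver's coordinate space, which spans -1000 to 1000 on both axes regardless of preview size, while the min3D model lives in your scene's world space. A minimal sketch of one way to bridge them (the scale factors and the moveModelToFace helper are my own placeholders, and display orientation is ignored here):

import android.hardware.Camera.Face;

// Camera.Face rects live in the driver's space: (-1000, -1000) is the
// top-left of the field of view and (1000, 1000) the bottom-right.
// Normalize to -1..1, then scale into the 3D scene's units.
private static final float SCENE_HALF_WIDTH = 2.0f;  // hypothetical, tune to your scene
private static final float SCENE_HALF_HEIGHT = 2.0f; // hypothetical, tune to your scene

private void moveModelToFace(Face face) {
    float nx = face.rect.centerX() / 1000f; // -1..1 across the preview
    float ny = face.rect.centerY() / 1000f; // -1..1 down the preview
    animeModel.position().x = nx * SCENE_HALF_WIDTH;
    animeModel.position().y = -ny * SCENE_HALF_HEIGHT; // scene Y usually grows up
}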
UPDATE:
Check out this library: https://code.google.com/p/asmlib-opencv/
