Panning the view of a gameObject instead of the camera in Unity3d? - android

I'm having a hard time panning the view of a gameObject in Unity3D. I'm new to scripting and I'm trying to develop an AR (Augmented Reality) application for Android.
I need to take a gameObject (e.g. a model of a floor) from the normal top-down view and render it in a "pseudo" iso view, inclined at 45 degrees. Because the gameObject is inclined, I need a panning function on its view, driven by four (4) buttons (left, right, forward (or up), backward (or down)).
The problem is that I cannot use any of the known panning script snippets from the forums, because the AR camera has to stay static in the scene.
I should also mention that the panning function needs to be active only in the isometric view (which I already compute with another script), not in the top-down view. So there should be no problem with the inclination of the gameObject's axes, right?
Below are two mockup images of the states in which the gameObject (the model floor) is rendered, followed by the script code (from the Unity reference) that I'm currently using, which doesn't really do what I need.
Here is the code snippet for moving the gameObject left. I use the same script with the speed sign flipped (-/+) for the other movements, but the object only moves up and down, never forward or backward:
#pragma strict
// The target gameObject.
var target: Transform;
// Speed in units per sec.
var speedLeft: float = -10;
private static var isPanLeft = false;

function FixedUpdate()
{
    if (isPanLeft == true)
    {
        // The step size is equal to speed times frame time.
        var step = speedLeft * Time.deltaTime;
        // Move model position a step closer to the target.
        transform.position = Vector3.MoveTowards(transform.position, target.position, step);
    }
}

static function doPanLeft()
{
    isPanLeft = !isPanLeft;
}
It would be great if someone were kind enough to take a look at this post and suggest the easiest way this functionality can be coded, as I'm a newbie.
Furthermore, if a sample code snippet or a tutorial can be provided, it would be appreciated, as I can learn a lot from it. Thank you all in advance for your time and answers.

If I understand correctly, you have a camera with a fixed rotation and position, and an object that you want to move up/down/left/right from the camera's perspective.
To rotate an object to a given set of angles you simply do:
transform.rotation = Quaternion.Euler(45, 45, 45);
Then, to move it, you use the camera's up/right/forward vectors in world space. For example, to move it up and to the left:
transform.position += camera.transform.up;
transform.position -= camera.transform.right;
If you only have one camera in your scene, you can access its transform via Camera.main.transform.
Here is an example of how to move the object when someone presses the left arrow:
if (Input.GetKeyDown(KeyCode.LeftArrow))
{
    transform.position -= camera.transform.right;
}
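Putting the pieces together, here is a minimal C# sketch of the four-button panning idea. The class name PanController, the panSpeed value and the OnPan* methods are just placeholders you would wire up to your own UI buttons, not anything from your project:

using UnityEngine;

// Attach this to the floor model. Hook the four UI buttons up to the OnPan* methods.
public class PanController : MonoBehaviour
{
    public float panSpeed = 1.0f;                // pan speed in units per second
    private Vector3 panDirection = Vector3.zero; // which way a button is currently pushing

    void Update()
    {
        if (panDirection != Vector3.zero)
        {
            // Pan along the camera's right/up axes so the motion matches what the
            // user sees on screen, regardless of how the model itself is inclined.
            Transform cam = Camera.main.transform;
            Vector3 step = (cam.right * panDirection.x + cam.up * panDirection.y) * panSpeed * Time.deltaTime;
            transform.position += step;
        }
    }

    // Wire these to the buttons (e.g. pointer-down sets a direction, pointer-up clears it).
    public void OnPanLeft()  { panDirection = Vector3.left; }
    public void OnPanRight() { panDirection = Vector3.right; }
    public void OnPanUp()    { panDirection = Vector3.up; }
    public void OnPanDown()  { panDirection = Vector3.down; }
    public void OnPanStop()  { panDirection = Vector3.zero; }
}

Since the buttons move the model and not the AR camera, the camera can stay static in the scene, and you can enable or disable this component when you switch between the top-down and isometric views.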

Related

AR Android Sceneform SDK display model only on the floor

I'm using Sceneform SDK for Android version
implementation 'com.google.ar.sceneform.ux:sceneform-ux:1.15.0'
I need my 3D model to be displayed only on the floor. For example, I have a 3D cage (a simple transparent cuboid) and I need to place this 3D model over a real object. At the moment, if the real object has a big enough surface, the model gets placed on top of it instead of going over it, and I need to avoid that behavior.
Here is some of the code.
Logic to init the ArFragment and display the model at the center of the camera view. I perform a hit test at the center of the device camera every time the frame changes.
private fun initArFragment() {
    arFragment.arSceneView.scene.addOnUpdateListener {
        arFragment.arSceneView?.let { sceneView ->
            sceneView.arFrame?.let { frame ->
                if (frame.camera.trackingState == TrackingState.TRACKING) {
                    val hitTest = frame.hitTest(sceneView.width / 2f, sceneView.height / 2f)
                    val hitTestIterator = hitTest.iterator()
                    if (hitTestIterator.hasNext()) {
                        val hitResult = hitTestIterator.next()
                        val anchor = hitResult.createAnchor()
                        if (anchorNode == null) {
                            anchorNode = AnchorNode()
                            anchorNode?.setParent(sceneView.scene)
                            transformableNode = DragTransformableNode(arFragment.transformationSystem)
                            transformableNode?.setParent(anchorNode)
                            boxNode = createBoxNode(.4f, .6f, .4f) // Creating the cuboid
                            boxNode?.setParent(transformableNode)
                        }
                        anchorNode?.anchor?.detach()
                        anchorNode?.anchor = anchor
                    }
                }
            }
        }
    }
}
I think it's expected behavior, because the hit test hits the surface of the real object as well, but I don't know how to avoid this.
Is there a way to ignore real objects and always place the 3D model on the floor?
UPDATE
I tried to follow @Mick's suggestions. I'm grouping all HitTestResults: when the hit test is done I get a list of all HitResults for all visible planes, and I group them by their rounded Y value.
Example
{1.35 -> [Y is 1.36776767, Y is 1.35434343, Y is 1.35999999, Y is 1.37723278]}
{1.40 -> [Y is 1.4121212, Y is 1.403232323, Y is 1.44454545, Y is 1.40001011]}
Then, for the X and Z anchor coordinates, I use the FIRST HitResult from the group with the minimum key when the keys are sorted from MIN to MAX (in the example above, the 1.35 group).
For the Y anchor coordinate, I take the elements of that minimum group and use the average of their Y values.
val hitResultList = hitTestIterator.asSequence().toList()
    .groupBy { round(it.hitPose.ty() * 20) / 20 }
    .minBy { it.key }?.value
val hitResult = hitResultList?.first()!!
val averageValueOfY = hitResultList?.map { it.hitPose.ty() }?.average()
createModel(hitResult, averageValueOfY)
Method to create Model
private fun createModel(newHitResult: HitResult, averageValueOfY: Double) {
    try {
        val newAnchorPose = newHitResult.createAnchor().pose
        anchorNode?.anchor?.detach()
        anchorNode?.anchor = arFragment.arSceneView.session?.createAnchor(
            Pose(floatArrayOf(newAnchorPose.tx(), averageValueOfY.toFloat(), newAnchorPose.tz()),
                floatArrayOf(0f, 0f, 0f, 0f)))
        isArBagModelRendered = true
        transformableNode?.select()
    } catch (exception: Exception) {
        Timber.d(exception)
    }
}
This update helped me get the behaviour I was trying to achieve, but I noticed that sometimes my Y anchor point ends up underground. It looks like the MIN plane is sometimes detected below the floor :( and I don't know how to fix this issue yet.
Actually, I think you have two separate problems to deal with for your use case:
the object being placed on the top surface, as you have highlighted in the question
occlusion, i.e. not hiding the part of the model that should be behind the real object once the model is actually put in the correct place.
A simple solution to the first problem, so you can check the second (or maybe more accurately a workaround), might be to simply get the user to place the object in front of the real object, i.e. the case in your example above, and then move it back until it is exactly where they want it.
If you leave the plane highlighting on, i.e. the grid lines that show where a plane is detected, it may also be more intuitive for a user to 'hit' the floor rather than the top of the object.
This would allow you to quickly test whether the occlusion issue is actually the more serious one before you go much further.
A more complex solution would be to iterate through the detected planes and experiment with comparing the 'pose' at the centre of each plane, to see if you can find a reliable way to decide which one is the floor. The relevant method is part of the Plane class:
public Pose getCenterPose ()
Returns the pose of the center of the detected plane. The pose's transformed +Y axis will be point normal out of the plane, with the +X and +Z axes orienting the extents of the bounding rectangle.
There are also methods to get the size of the width or depth of the plane if you were sure the floor will always be the biggest plane:
public float getExtentX ()
Returns the length of this plane's bounding rectangle measured along the local X-axis of the coordinate space centered on the plane.
public float getExtentZ ()
Returns the length of this plane's bounding rectangle measured along the local Z-axis of the coordinate frame centered on the plane.
Unfortunately, I don't think there is any handy existing helper function like 'get lowest plane' or 'get largest plane'.
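For illustration only, here is a rough Kotlin sketch of that idea, assuming the same arFragment as in the question. The "lowest upward-facing plane" heuristic is just one thing to experiment with, not a guaranteed way to find the floor:

// Sketch: pick a candidate "floor" plane from the planes ARCore is currently tracking.
private fun findFloorPlane(): Plane? {
    val session = arFragment.arSceneView.session ?: return null
    val candidates = session.getAllTrackables(Plane::class.java)
        .filter { it.trackingState == TrackingState.TRACKING }
        .filter { it.type == Plane.Type.HORIZONTAL_UPWARD_FACING }
    // Heuristic: the floor is usually the lowest upward-facing plane.
    // Alternatively, try the largest one: candidates.maxBy { it.extentX * it.extentZ }
    return candidates.minBy { it.centerPose.ty() }
}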
A note on the occlusion issue: there are frameworks and libraries that provide some forms of software-based occlusion, i.e. without requiring the device to have extra depth sensors, so it may be worth exploring those a little as well.

Changing GameObject Position with Unity and ARToolkit

I'm building an Augmented Reality app with Unity and ARToolkit for Android. I have multiple GameObjects on screen that are children of my marker, and that works well. I then created a very simple script to move one of the objects and attached it to the GameObject. It looks like this:
void Update()
{
    Vector3 currentPos = transform.position;
    transform.position = new Vector3(currentPos.x + (.01f * xDirection * xSpeed),
        currentPos.y + (.01f * yDirection * ySpeed), currentPos.z);
}
The rest of the script does nothing other than alter the values of the direction and speed variables. It works and moves in the directions that I expect; however, the object visually shrinks in size. Possibly it's just lower on the z-axis so it appears smaller, or possibly the scaling is being affected. I think it may be related to moving the phone up and down while looking at the marker.
I suppose I have to move GameObjects in a different manner than normal when using ARToolkit. What's the proper way?
Thanks
I've no connection with ARToolkit, but try checking out their Coordinate System documentation.
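One thing worth trying, purely as a guess based on the usual ARToolkit setup where the objects are children of the marker: move the object in its parent marker's local space instead of world space, so the motion is expressed relative to the marker rather than to the AR camera. A hypothetical C# variant of the Update() from the question:

// Moves the object in the marker's (parent's) local space instead of world space.
void Update()
{
    Vector3 currentPos = transform.localPosition;
    transform.localPosition = new Vector3(
        currentPos.x + (.01f * xDirection * xSpeed),
        currentPos.y + (.01f * yDirection * ySpeed),
        currentPos.z);
}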

LibGDX. Android. Black background on transparent DecalSprites

In my app I use several DecalSprites as part of my scene. They all have transparency (PNG textures). When they overlap, some of them show a black background instead of transparency. The DecalSprites have different Z coordinates, so they should appear one behind another.
Please also note the line on the border of a texture; this is also something that I'm struggling to remove.
Update 1: I use a PerspectiveCamera in the scene, but all the decals are positioned to face the camera, as in 2D mode. This "black" background appears only in certain cases, e.g. when the camera moves right (and all those decals end up on the left of the scene). I also use the CameraGroupStrategy.
Solved! The reason was that CameraGroupStrategy, when ordering Decals (from farthest to closest to the camera), uses the "combined" vector distance between the camera and the Decal. When my camera panned to the left or right, the distance to the Z-farthest Decal became LESS than that to the Z-closer Decal, which produced the artifact. The fix:
GroupStrategy strategy = new CameraGroupStrategy(cam , new ZStrategyComparator());
And the Comparator:
private class ZStrategyComparator implements Comparator<Decal> {

    @Override
    public int compare (Decal o1, Decal o2) {
        float dist1 = cam.position.dst(0, 0, o1.getPosition().z);
        float dist2 = cam.position.dst(0, 0, o2.getPosition().z);
        return (int) Math.signum(dist2 - dist1);
    }
}
Thanks to all the guys who tried to help, especially Xoppa, who pointed me in the right direction in the libGDX IRC.

Unity2D - How to rotate a 2D object on touch/click/press

First of all, I'll let you know that I'm new to Unity and to coding in general (I do know some very basic JavaScript). My question: how can I rotate a 2D object (a prefab) 120 degrees around a certain axis (in my case the z-axis, so it rotates as if you're looking at a steering wheel) every time I touch the screen? Right now I have it like this:
function TouchOnScreen ()
{
    if (Input.touchCount > 0)
    {
        var touch = Input.touches[0];
        if (touch.position.x < Screen.width/2)
        {
            transform.rotation = Quaternion.Euler(0, 0, 120);
            Debug.Log("RotateRight");
        }
        else if (touch.position.x > Screen.width/2)
        {
            transform.rotation = Quaternion.Euler(0, 0, -120);
            Debug.Log("RotateLeft");
        }
    }
}
This code rotates the object whenever I press on a certain side of the screen, but not the way I want it to. I want to actually see the object rotating from A to B, not jumping (as it does now) from A to B in a single frame. Also, this code only lets me rotate once in each direction.
How can I make it so that whenever I press on a certain side of the screen, it adds to or subtracts from the previously rotated angle, so I can keep on rotating?
NOTE: Please use javascript, and if you know a simpler code, let me know!
Help is highly appreciated, thanks in advance!
Instead of
transform.rotation = Quaternion.Euler(0,0,-120);
You use:
var lSpeed = 10.0f; // Set this to a value you like
transform.rotation = Quaternion.Lerp(transform.rotation, Quaternion.Euler(0, 0, -120), Time.deltaTime * lSpeed);
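To also cover the "keep on rotating" part of the question, one possible approach (shown here as a C# sketch; the same logic ports directly to UnityScript, and the names TouchRotator and targetAngle are just placeholders) is to accumulate a target angle on each touch and ease towards it every frame:

using UnityEngine;

// Sketch: add or subtract 120 degrees per touch, then smoothly ease toward the target.
public class TouchRotator : MonoBehaviour
{
    public float lSpeed = 10.0f;       // lerp speed, tune to taste
    private float targetAngle = 0f;    // accumulated Z rotation in degrees

    void Update()
    {
        if (Input.touchCount > 0 && Input.touches[0].phase == TouchPhase.Began)
        {
            // Left half of the screen adds 120 degrees, right half subtracts.
            if (Input.touches[0].position.x < Screen.width / 2)
                targetAngle += 120f;
            else
                targetAngle -= 120f;
        }

        // Ease the current rotation toward the accumulated target every frame.
        transform.rotation = Quaternion.Lerp(transform.rotation, Quaternion.Euler(0, 0, targetAngle), Time.deltaTime * lSpeed);
    }
}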

Unity2D Android Touch misbehaving

I am attempting to translate an object depending on the touch position of the user.
The problem is that, when I test it out, the object disappears as soon as I drag my finger across the phone screen. I am not entirely sure what's going on with it.
If somebody can guide me, that would be great :)
Thanks
This is the Code:
#pragma strict

function Update () {
    for (var touch : Touch in Input.touches)
    {
        if (touch.phase == TouchPhase.Moved) {
            transform.Translate(0, touch.position.y, 0);
        }
    }
}
The problem is that you're moving the object by touch.position.y. This isn't a point in the world, it's a point on the touch screen. What you'll want is probably Camera.main.ScreenToWorldPoint(touch.position).y, which will give you the in-world position of wherever you've touched.
Of course, Translate takes a vector indicating a distance, not a final destination, so simply plugging the above into it still won't work as you intend.
Instead maybe try this:
Vector3 EndPos = Camera.main.ScreenToWorldPoint(touch.position);
float speed = 1f;
transform.position = Vector3.Lerp(transform.position, EndPos, speed * Time.deltaTime);
which should move the object towards your finger while at the same time keeping its movements smooth looking.
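For context, a minimal sketch of the whole thing inside the touch loop might look like this (written in C#; it assumes a 2D scene with an orthographic camera, otherwise you would need to pass a sensible z distance to ScreenToWorldPoint):

using UnityEngine;

// Sketch: move the object smoothly toward the world position under the player's finger.
public class TouchFollow : MonoBehaviour
{
    public float speed = 1f;

    void Update()
    {
        foreach (Touch touch in Input.touches)
        {
            if (touch.phase == TouchPhase.Moved)
            {
                // Convert the screen-space touch position to a world-space point.
                Vector3 endPos = Camera.main.ScreenToWorldPoint(touch.position);
                endPos.z = transform.position.z;   // keep the object's own depth
                transform.position = Vector3.Lerp(transform.position, endPos, speed * Time.deltaTime);
            }
        }
    }
}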
You'll want to ask this question at Unity's dedicated Questions/Answers site: http://answers.unity3d.com/index.html
There are very few people who come to Stack Overflow for Unity-specific questions, unless they relate to Android/iOS-specific features.
As for the cause of your problem: touch.position.y is defined in screen space (pixels), whereas transform.Translate expects world units (meters). You can convert between the two using the Camera.ScreenToWorldPoint() method, then create a vector from the camera position to that world point. With this vector you can then either intersect some geometry in the scene or simply use a point in front of the camera.
http://docs.unity3d.com/Documentation/ScriptReference/Camera.ScreenToWorldPoint.html
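If you go down the "vector from the camera" route, a rough C# sketch might look like the following (it assumes the scene geometry you want to hit has colliders, and the class name is just a placeholder):

using UnityEngine;

// Sketch: cast a ray through the touch point and move the object to whatever it hits.
public class TouchRaycastMover : MonoBehaviour
{
    void Update()
    {
        foreach (Touch touch in Input.touches)
        {
            if (touch.phase == TouchPhase.Moved)
            {
                Ray ray = Camera.main.ScreenPointToRay(touch.position);
                RaycastHit hit;
                if (Physics.Raycast(ray, out hit))
                {
                    // Place the object at the point where the ray hits the scene geometry.
                    transform.position = hit.point;
                }
            }
        }
    }
}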
