I'm using this code to get the direction of a throw:
throwDirection = -(Camera.main.WorldToScreenPoint(hookedObject.transform.position) - Input.mousePosition)
                 / (Camera.main.WorldToScreenPoint(hookedObject.transform.position) - Input.mousePosition).magnitude;
My problem is that when I try to translate this to work with touch, it gives me this error:
error CS0121: The call is ambiguous between the following methods or properties: UnityEngine.Vector2.operator -(UnityEngine.Vector2, UnityEngine.Vector2) and UnityEngine.Vector3.operator -(UnityEngine.Vector3, UnityEngine.Vector3)
Here is the touch version of the code; it basically just replaces Input.mousePosition with Input.GetTouch(0).position:
throwDirection = -(Camera.main.WorldToScreenPoint(hookedObject.transform.position) - Input.GetTouch(0).position)
                 / (Camera.main.WorldToScreenPoint(hookedObject.transform.position) - Input.GetTouch(0).position).magnitude;
I'm confused why this error is occurring. I'm using a Vector2 for the mouse position and a Vector2 for the touch position, yet the error only occurs in the touch version of the code. It appears twice for every use of Input.GetTouch(0).position.
When I try to save Input.GetTouch(0).position to a variable, the same error occurs. But it only occurs when I use the variable, not when I store it.
For instance:
Vector2 touchPos = Input.GetTouch(0).position;
This doesn't give me the error; however, if I try to use this variable in another statement, the error occurs.
hookedObject.transform.position is a Vector3, while Input.GetTouch(0).position is a Vector2.
Input.mousePosition is actually also a Vector3, which is why that code did work.
Vector2 is implicitly convertible to Vector3 and vice versa, which is why the subtraction is ambiguous. Casting the touch position to Vector3 should fix your problem:
throwDirection = -(Camera.main.WorldToScreenPoint(hookedObject.transform.position) - (Vector3)Input.GetTouch(0).position)
                 / (Camera.main.WorldToScreenPoint(hookedObject.transform.position) - (Vector3)Input.GetTouch(0).position).magnitude;
Input.GetTouch(0).position is a Vector2.
Camera.main.WorldToScreenPoint returns a Vector3.
So your Vector3 - Vector2 is ambiguous. You can subtract a Vector3 from another Vector3, or a Vector2 from another Vector2, but not mix them without a cast.
You can also allocate a new Vector2 from the x and y of the Camera.main.WorldToScreenPoint result, and then subtract your vectors.
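Either way, the whole expression is just vector normalization: direction = (to - from) / |to - from|. As a sanity check, the same math can be sketched in plain Java (a minimal stand-in, not Unity's API):

```java
public class ThrowDirection {
    // Unit direction from 'from' towards 'to'; equivalent to
    // -(from - to) / (from - to).magnitude in the question's code.
    static double[] direction(double fromX, double fromY, double toX, double toY) {
        double dx = toX - fromX;
        double dy = toY - fromY;
        double len = Math.sqrt(dx * dx + dy * dy);
        return new double[] { dx / len, dy / len };
    }

    public static void main(String[] args) {
        double[] d = direction(0, 0, 3, 4); // a 3-4-5 triangle, so (0.6, 0.8)
        System.out.println(d[0] + " " + d[1]);
    }
}
```

In Unity itself the entire expression collapses to `(Input.mousePosition - screenPos).normalized`, which also sidesteps the ambiguity when both operands have the same type.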
I have been working through and analysing the hello_ar_java sample project in Android. I want to place a 3D object at the centre of the camera view in Android. I'm expecting Android-only answers; I don't want Unity answers because I don't know Unity.
To place the object at the camera position:
Frame frame = arSceneView.getArFrame();
// Pose.tx()/ty()/tz() are the translation; qx()/qy()/qz()/qw() are the rotation quaternion.
float x = frame.getCamera().getPose().tx();
float y = frame.getCamera().getPose().ty();
float z = frame.getCamera().getPose().tz();
Node andy = new Node();
andy.setParent(arSceneView.getScene().getCamera());
andy.setLocalPosition(new Vector3(x, y, z));
andy.setRenderable(arrowRenderable); // your renderable object name
or, instead of `andy.setLocalPosition(new Vector3(position.x, position.y, position.z));`, just use
`andy.setLocalPosition(new Vector3(0f, 0f, -1f)); // places the object centred in front of the camera`
The point where the camera starts is THE origin. So just place your object at the origin, and everything should be fine. If you want the object to move with the camera, make the camera's transform the parent of your object's transform.
I can't help you with the Java code, but I have done this in Unity and I imagine the concept is the same. What you want to do is create a point in space (a Vector3) that is always in front of the camera, moving with it. Then you spawn (instantiate) your object(s) at that Vector3. Let me know if that helps.
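The "point always in front of the camera" idea from both answers is just: camera position plus the camera's forward vector scaled by the desired distance. A minimal sketch in plain Java (hypothetical math, independent of the ARCore/Sceneform APIs):

```java
public class FrontOfCamera {
    // World-space point 'distance' metres in front of a camera, given its
    // position and unit-length forward direction.
    static float[] pointInFront(float[] camPos, float[] forward, float distance) {
        return new float[] {
            camPos[0] + forward[0] * distance,
            camPos[1] + forward[1] * distance,
            camPos[2] + forward[2] * distance
        };
    }

    public static void main(String[] args) {
        // Camera at the origin looking down -Z (OpenGL convention): one metre
        // ahead is (0, 0, -1), matching the setLocalPosition(0f, 0f, -1f) trick.
        float[] p = pointInFront(new float[] {0, 0, 0}, new float[] {0, 0, -1}, 1f);
        System.out.println(p[0] + " " + p[1] + " " + p[2]);
    }
}
```

Parenting the node to the camera, as in the first answer, makes the framework recompute this point for you every frame.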
I am trying to run the Unity 3D Roll-a-Ball example on an Android device. The ball sticks to the side walls, and it also moves very slowly while it is in contact with them. Can you help me with this issue?
Here is my accelerometer code for moving the ball:
Screen.sleepTimeout = SleepTimeout.NeverSleep;
curAc = Vector3.Lerp(curAc, Input.acceleration - zeroAc, Time.deltaTime / smooth);
GetAxisV = Mathf.Clamp(curAc.y * sensV, -1, 2);
GetAxisH = Mathf.Clamp(curAc.x * sensH, -1, 2);
Vector3 movement = new Vector3(GetAxisH, 0.0f, GetAxisV);
rigidbody.AddForce(movement * speedAc * 2f);
Thanks In Advance
I had a similar problem when building a pinball game. I was not using the accelerometer, but the ball behaviour was exactly the same.
Just check the physic material of your objects. The ball, the walls, and the floor all need checking. As I don't know exactly what kind of game you are building, I recommend you try out every parameter (friction, bounciness, and their combine modes).
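As an aside, the Vector3.Lerp call in the question's code is acting as a low-pass filter on the noisy accelerometer signal. The same smoothing step can be sketched per axis in plain Java (the values here are made up for illustration):

```java
public class AccelFilter {
    // One exponential-smoothing step: move 'current' a fraction 't' towards
    // 'target'. This is what Vector3.Lerp(curAc, Input.acceleration - zeroAc, t)
    // does for each axis.
    static float lerp(float current, float target, float t) {
        return current + (target - current) * t;
    }

    public static void main(String[] args) {
        float smoothed = 0f;
        for (int i = 0; i < 5; i++) {
            smoothed = lerp(smoothed, 1f, 0.5f); // raw reading held at 1.0
        }
        System.out.println(smoothed); // converges towards 1.0 without jitter
    }
}
```

A small `t` (large `smooth` divisor) means more smoothing but more lag; that trade-off is worth tuning alongside the physic material.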
We track a tablet with markers and the OptiTrack system, using a JNI wrapper to access functions of the NatNet SDK. After receiving the position data on the server, we stream it back in real time to the client (the tablet itself) and render an augmented-reality scene with the libgdx framework. The target platform is Android.
Here is some sample data we are receiving (7 values: x, y, z, qx, qy, qz, qw):
-0,465436 0,888108 -0,991635 0,331507 0,091413 -0,379475 0,858921
-0,438584 0,888583 -0,982334 0,356608 0,092872 -0,364935 0,855002
-0,414451 0,892762 -0,973772 0,365460 0,096244 -0,348293 0,857828
-0,394074 0,900471 -0,963359 0,365230 0,109444 -0,323559 0,865990
The first three values describe the position in the room. We scale the values of the z-axis by a factor of 3 to get a faithful translation from the small values we receive to the size we need in our libgdx render scene on the tablet, and it works fine! The last 4 values describe the rotation of the tracked tablet as a quaternion. This is really new to me, as I have never worked with quaternions before.
Libgdx supports rotations in a 3D scene with quaternions, but after handing the values over to the concrete modelInstance the rotation is totally unexpected and full of errors. Here is the important code:
// gets the rotation of the latest received rigid body data
// and stores it in a libgdx Quaternion
Quaternion rotation = rigidBody.getQuat();
modelInst.transform.rotate(rotation);
...
public Quaternion getQuat() {
    float qx = rigidBody.qx;
    float qy = rigidBody.qy;
    float qz = rigidBody.qz;
    float qw = rigidBody.qw;
    return new Quaternion(qx, qy, qz, qw);
}
Looks obvious to me so far, but it does not work. When I rotate the tablet, the model translates too, and it rotates in wrong and unexpected directions even without my translating the tablet. I've been looking for solutions for a week now. First I tried different permutations of the parameters I hand over to the libgdx Quaternion class. The rotation values seem to be in a relative (local) form in a right-handed coordinate system; at least this is what's written on page 12 of the official NatNet User Guide:
NatNet User Guide
I think libgdx uses absolute quaternions, but I couldn't figure it out for sure. If that is the case, how could I transform relative quaternions into absolute ones? Or maybe it has something to do with our scaling of the z-axis values?
We appreciate any help. Thank you in advance!
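One way to sanity-check the incoming quaternions independently of libgdx is to rotate a known test vector by hand and see whether the result matches the tablet's physical motion. A unit quaternion q = (x, y, z, w) rotates a vector v via v' = q·v·q⁻¹, which expands to the cross-product form below (plain Java, no library assumptions):

```java
public class QuatCheck {
    // Rotate vector v by the unit quaternion (x, y, z, w) using
    // v' = v + w*t + cross(q.xyz, t), where t = 2 * cross(q.xyz, v).
    static double[] rotate(double x, double y, double z, double w, double[] v) {
        double tx = 2 * (y * v[2] - z * v[1]);
        double ty = 2 * (z * v[0] - x * v[2]);
        double tz = 2 * (x * v[1] - y * v[0]);
        return new double[] {
            v[0] + w * tx + (y * tz - z * ty),
            v[1] + w * ty + (z * tx - x * tz),
            v[2] + w * tz + (x * ty - y * tx)
        };
    }

    public static void main(String[] args) {
        // 90 degrees about +Y: q = (0, sin 45, 0, cos 45).
        double s = Math.sin(Math.PI / 4), c = Math.cos(Math.PI / 4);
        double[] r = rotate(0, s, 0, c, new double[] {1, 0, 0});
        // In a right-handed system, +X rotated 90 degrees about +Y lands on -Z.
        System.out.printf("%.3f %.3f %.3f%n", r[0], r[1], r[2]);
    }
}
```

If the hand-rotated axes match the tracked motion but the libgdx scene does not, the mismatch is in the handedness or component order expected by the Quaternion constructor rather than in the data itself. Note also that `transform.rotate(...)` accumulates on top of the existing transform each frame, so absolute orientations generally need the transform reset (or a set-rotation call) first.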
I am attempting to translate an object depending on the touch position of the user.
The problem is that when I test it out, the object disappears as soon as I drag my finger across the phone screen. I am not entirely sure what's going on.
If somebody can guide me that would be great :)
Thanks
This is the Code:
#pragma strict

function Update () {
    for (var touch : Touch in Input.touches)
    {
        if (touch.phase == TouchPhase.Moved) {
            transform.Translate(0, touch.position.y, 0);
        }
    }
}
The problem is that you're moving the object by touch.position.y. This isn't a point in the world, it's a point on the touch screen (in pixels). What you'll want is probably Camera.main.ScreenToWorldPoint(touch.position).y, which will give you the world-space position of wherever you've touched.
Of course, Translate takes a vector indicating distance, not final destination, so simply sticking the above in it still won't work as you're intending.
Instead maybe try this:
Vector3 EndPos = Camera.main.ScreenToWorldPoint(touch.position);
float speed = 1f;
transform.position = Vector3.Lerp(transform.position, EndPos, speed * Time.deltaTime);
which should move the object towards your finger while at the same time keeping its movements smooth looking.
You'll want to ask this question at Unity's dedicated Questions/Answers site: http://answers.unity3d.com/index.html
There are very few people who come to Stack Overflow for Unity-specific questions, unless they relate to Android/iOS-specific features.
As for the cause of your problem: touch.position.y is defined in screen space (pixels), whereas transform.Translate is expecting world units (metres). You can convert between the two using the Camera.ScreenToWorldPoint() method, then create a vector from the camera position to that screen world point. With this vector you can then either intersect some geometry in the scene or simply use it as a point in front of the camera.
http://docs.unity3d.com/Documentation/ScriptReference/Camera.ScreenToWorldPoint.html
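The conversion both answers describe can be sketched without Unity: a pixel is mapped to normalized device coordinates, then onto a plane at a chosen depth in front of the camera. This is a simplified, hypothetical pinhole-camera version of what Camera.ScreenToWorldPoint does (here in plain Java):

```java
public class ScreenToWorld {
    // Map a screen pixel to a world-space point on a plane 'depth' units in
    // front of a camera sitting at the origin looking down -Z.
    // fovYDeg is the vertical field of view in degrees.
    static double[] screenToWorld(double px, double py, int width, int height,
                                  double fovYDeg, double depth) {
        double halfH = depth * Math.tan(Math.toRadians(fovYDeg) / 2);
        double halfW = halfH * width / height;
        double ndcX = 2 * px / width - 1;  // -1 .. +1, left to right
        double ndcY = 2 * py / height - 1; // -1 .. +1, bottom to top
        return new double[] { ndcX * halfW, ndcY * halfH, -depth };
    }

    public static void main(String[] args) {
        // The screen centre maps to a point straight ahead of the camera.
        double[] p = screenToWorld(640, 360, 1280, 720, 60, 10);
        System.out.println(p[0] + " " + p[1] + " " + p[2]);
    }
}
```

Unity's version also accounts for the camera's own position and rotation; the point of the sketch is only that a pixel coordinate means nothing in world space until you pick a depth.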
I have written a first person camera class for android.
The class is really simple: the camera object has its three axes
X, Y and Z,
and there are functions to create the ModelView matrix (i.e. calculateModelViewMatrix()),
rotate the camera along its X and Y axes,
and translate the camera along its Z-axis.
I think that my ModelView matrix calculation is correct, and I can also translate the camera along the Z-axis.
Rotation along the X-axis seems to work, but along the Y-axis it gives strange results.
Another problem with the rotation is that instead of the camera being rotated, my 3D model starts to rotate along its own axis instead.
I have written another implementation based on a look-at point, using OpenGL ES's GLU.gluLookAt() function to obtain the ModelView matrix, but it seems to suffer from exactly the same problems.
EDIT
First of all thanks for your reply.
I have actually made a second implementation of the Camera class, this time using the rotation functions provided in android.opengl.Matrix class as you said.
I have provided the code below, which is much simpler.
To my surprise, the results are "Exactly" the same.
This means that my rotation functions and Android's rotation functions are producing the same results.
I did a simple test and looked at my data.
I just rotated the look-at point 1 degree at a time around the Y-axis and looked at the coordinates. It seems that my look-at point is lagging behind the exact rotation angle, e.g. at 20 degrees it has only rotated 10 to 12 degrees.
And after 45 degrees it starts reversing back.
There is a class android.opengl.Matrix which is a collection of static methods that do everything you need on a float[16] you pass in. I highly recommend you use those functions instead of rolling your own. You'd probably want setLookAtM with the look-at point calculated from your camera angles (using sin and cos, as you are doing in your code - I assume you know how to do this).
-- edit in response to new answer --
(you should probably have edited your original question, by the way - your answer as another question confused me for a bit)
Ok, so here's one way of doing it. This is uncompiled and untested. I decided to build the matrix manually instead; perhaps that'll give a bit more information about what's going on...
class TomCamera {
    // These are our inputs - eye position, and the orientation of the camera.
    public float mEyeX, mEyeY, mEyeZ; // position
    public float mYaw, mPitch, mRoll; // Euler angles

    // This is the output matrix to pass to OpenGL.
    public float mCameraMatrix[] = new float[16];

    // Convert inputs to outputs.
    public void createMatrix() {
        // create a camera matrix (YXZ order is pretty standard)
        // you may want to negate some of these constant 1s to match expectations.
        Matrix.setRotateM(mCameraMatrix, 0, mYaw, 0, 1, 0);
        Matrix.rotateM(mCameraMatrix, 0, mPitch, 1, 0, 0);
        Matrix.rotateM(mCameraMatrix, 0, mRoll, 0, 0, 1);
        Matrix.translateM(mCameraMatrix, 0, -mEyeX, -mEyeY, -mEyeZ);
    }
}
}
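The "look-at point calculated from your camera angles" mentioned earlier can be sketched like this. The conventions chosen here (yaw about +Y, pitch about +X, looking down -Z at zero angles) are assumptions and may need sign flips to match your setup:

```java
public class CameraForward {
    // Unit forward vector for a camera with the given yaw and pitch in degrees,
    // assuming yaw 0 / pitch 0 looks down -Z and yaw rotates about +Y.
    // The look-at point for setLookAtM is then eye + forward.
    static float[] forward(float yawDeg, float pitchDeg) {
        double yaw = Math.toRadians(yawDeg);
        double pitch = Math.toRadians(pitchDeg);
        return new float[] {
            (float) (-Math.sin(yaw) * Math.cos(pitch)),
            (float) Math.sin(pitch),
            (float) (-Math.cos(yaw) * Math.cos(pitch))
        };
    }

    public static void main(String[] args) {
        // With these conventions, yaw = 90 turns the camera to face -X.
        float[] f = forward(90f, 0f);
        System.out.println(f[0] + " " + f[1] + " " + f[2]);
    }
}
```

Recomputing the look-at point from the angles every frame, rather than incrementally rotating a stored vector, also avoids the drift the asker observed (the look-at point lagging behind the rotation angle).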