I'm building an augmented reality app for Android and I'm using jMonkeyEngine as my 3D engine.
I want to do a simple thing: move an object from the left side of the screen to the right (along the X axis) by changing the azimuth of the view (which I get from the compass).
I can calculate where the object is (the rendered object has a GPS location), so I can tell whether I'm looking directly at it or whether it is to my left or right. My problems now are making the movement smooth and calculating the change in local translation. My questions are:
1. How can I calculate the object's local translation based on the azimuth that I have?
2. How do I make the change of local translation smooth? Right now, when the value changes from (for example) -4 to -1, the Spatial jumps; I would like it to move smoothly. I've tried to use Cinematic, but either it is not meant for this or I'm not using it properly.
As for the calculation, I've tried something like this:
(objectAzimuth - azimuthWhereIlook) / offset
where offset is the span of the X axis; for example, if my range is <-20, 20>, the offset is 40.
The difference
(objectAzimuth - azimuthWhereIlook)
is a proper way of checking where the object is relative to the two azimuths (I have solid math for that and it is working). Based on it I know where the object is (straight ahead, on the left, or on the right).
So I have the point where the object should be on screen, but I don't know how to map it onto the X axis.
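For illustration, here is a minimal sketch of both steps (mapping the azimuth difference to an X coordinate, then easing towards it every frame) inside a jMonkeyEngine SimpleApplication. The <-20°, 20°> range, the screen half-width in world units, the model path and the easing speed are assumed values of mine, not taken from the question; the azimuth fields would be fed by your compass code.

```java
import com.jme3.app.SimpleApplication;
import com.jme3.math.FastMath;
import com.jme3.math.Vector3f;
import com.jme3.scene.Spatial;

public class ArApp extends SimpleApplication {
    private static final float FOV_HALF = 20f;          // visible azimuth range is <-20°, 20°>
    private static final float SCREEN_HALF_WIDTH = 4f;  // world units from centre to screen edge

    private Spatial marker;
    private volatile float objectAzimuth;  // azimuth of the rendered POI (from your GPS maths)
    private volatile float viewAzimuth;    // azimuth the device is facing (from the compass)

    @Override
    public void simpleInitApp() {
        marker = assetManager.loadModel("Models/marker.j3o");  // placeholder model path
        rootNode.attachChild(marker);
    }

    @Override
    public void simpleUpdate(float tpf) {
        // 1. Map the azimuth difference onto the X axis:
        //    0° difference -> centre of screen, ±20° -> screen edge.
        float diff = normalizeDegrees(objectAzimuth - viewAzimuth);
        float targetX = (diff / FOV_HALF) * SCREEN_HALF_WIDTH;

        // 2. Smooth the move: ease a fraction of the way towards the target
        //    each frame instead of jumping straight to the new value.
        Vector3f pos = marker.getLocalTranslation();
        float newX = FastMath.interpolateLinear(Math.min(1f, tpf * 5f), pos.x, targetX);
        marker.setLocalTranslation(newX, pos.y, pos.z);
    }

    // Wrap an angle difference into (-180°, 180°] so the object takes the short way round.
    private static float normalizeDegrees(float deg) {
        while (deg > 180f) deg -= 360f;
        while (deg <= -180f) deg += 360f;
        return deg;
    }
}
```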
Related
I'm working with the Android Mobile SDK. How can I set a tilt value for the default current-position indicator (the green circle), so that when the user changes the tilt of the entire map (with a two-finger swipe), the position indicator follows as well?
Or, if it can't be done with the default marker, how can I achieve it with a custom marker resource? I can illustrate it with the following screenshots: on the second image the circle stays the same instead of being transformed into the isometric view itself.
You probably want to use a 3D object model and create a LocalMesh + MapLocalModel object to display it in 3D.
Either create a flat rectangle and alter its pitch/yaw/roll according to the camera (which may look awkward), or use a 3D mesh and have it follow the camera's pitch/yaw/roll so it looks like the object has depth.
I am working on an AR app that needs to move an image depending on the device's position and orientation.
It seems that Game Rotation Vector should provide the necessary data to achieve this.
However, I can't seem to understand what the values I get from the GRV sensor represent. For instance, in order to get back to the same value on the Z axis I have to rotate the device 720 degrees. This seems odd.
If I could somehow convert these numbers into angles relative to the device's reference frame around the x, y, z axes, my problem would be solved.
I have googled this issue for days and didn't find any sensible information on the meaning of the GRV coordinates and how to use them.
TL;DR: What do the numbers from the GRV sensor represent? And how do I convert them to angles?
As the docs state, the GRV sensor gives back a 3D rotation vector, represented by the three components:
x axis (x * sin(θ/2))
y axis (y * sin(θ/2))
z axis (z * sin(θ/2))
This is confusing at first. θ (pronounced theta) is the angle of a single rotation about the axis given by the unit vector (x, y, z), and each component is that axis scaled by sin(θ/2), which isn't obvious from the docs. The half-angle is also consistent with the 720° behaviour you noticed: after one full 360° turn each component comes back negated, and only after a second full turn does it exactly repeat.
Note also that when working with angles, especially in 3D, we generally use radians, not degrees, so theta is in radians. This looks like a good introductory explanation.
But the reason why it's given to us in this format is that it can easily be used in matrix rotations, and especially as a quaternion. In fact, these are the vector components of a quaternion, the part that encodes the rotation axis; the 4th component is the scalar part, cos(θ/2), which completes the unit quaternion. So a quaternion packs the whole axis-plus-angle rotation into four numbers.
These are directly usable in OpenGL which is the Android (and the rest of the world's) 3D library of choice. Check this tutorial out for some OpenGL rotations info, this one for some general quaternion theory as applied to 3D programming in general, and this example by Google for Android which shows exactly how to use this information directly.
If you read the articles, you can see why you get it in this form and why it's called Game Rotation Vector - it's what's been used by 3D programmers for games for decades at this point.
TL;DR: this example is excellent.
Edit - How to use this to show a 2D image which is rotated by this vector in 3D space.
In the example above, SensorManager.getRotationMatrixFromVector converts the Game Rotation Vector into a rotation matrix which can be applied to rotate anything in 3D. To apply this rotation to a 2D image, you have to think of the image in 3D: it's really a segment of a plane, like a sheet of paper, and you map your image (which in the jargon is called a texture) onto that plane segment.
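As a rough sketch of that conversion, assuming only the standard SensorManager API (the class name, array sizes and the SENSOR_DELAY_GAME choice are mine), getting from the raw vector to a rotation matrix, and from there to plain azimuth/pitch/roll angles, looks like this:

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class GrvListener implements SensorEventListener {
    private final float[] rotationMatrix = new float[16]; // 4x4, directly usable by OpenGL
    private final float[] orientation = new float[3];     // azimuth, pitch, roll in radians

    public void register(SensorManager sensorManager) {
        Sensor grv = sensorManager.getDefaultSensor(Sensor.TYPE_GAME_ROTATION_VECTOR);
        sensorManager.registerListener(this, grv, SensorManager.SENSOR_DELAY_GAME);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_GAME_ROTATION_VECTOR) return;

        // Turn the rotation vector into a rotation matrix...
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        // ...and, if you want plain angles, extract them from that matrix.
        SensorManager.getOrientation(rotationMatrix, orientation);

        float azimuthDeg = (float) Math.toDegrees(orientation[0]);
        float pitchDeg   = (float) Math.toDegrees(orientation[1]);
        float rollDeg    = (float) Math.toDegrees(orientation[2]);
        // Use rotationMatrix for OpenGL, or the degree values for your own maths.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```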
Here is a tutorial on texturing cubes in OpenGL for Android, with example code and an in-depth discussion. From cubes it's a short step to a plane segment - it's just one face of a cube! In fact, that's a good resource for getting to grips with OpenGL on Android; I'd recommend reading the previous and subsequent tutorial steps too.
You also mentioned translation. Look at the onDrawFrame method in the Google code example: there is a translation using gl.glTranslatef and then a rotation using gl.glMultMatrixf. This is how you translate and rotate.
The order in which these operations are applied matters. Here's a fun way to experiment with that: check out Livecodelab, a live 3D sketch-coding environment which runs inside your browser. In particular, this tutorial encourages reflection on the ordering of operations; obviously the command move is a translation.
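To make the ordering point concrete, a stripped-down onDrawFrame in the same spirit as that example might look like the sketch below. It assumes a GLSurfaceView.Renderer where rotationMatrix is the 4x4 array filled by getRotationMatrixFromVector; texturedQuad and the -3 depth offset are placeholders of mine.

```java
// Inside your GLSurfaceView.Renderer implementation (OpenGL ES 1.x, as in the Google demo).
@Override
public void onDrawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();

    // Translate first: push the textured plane segment away from the camera...
    gl.glTranslatef(0f, 0f, -3f);
    // ...then rotate it by the sensor-derived matrix. Swapping these two calls
    // gives a very different result, which is the ordering point made above.
    gl.glMultMatrixf(rotationMatrix, 0);

    texturedQuad.draw(gl);   // hypothetical helper that draws the image plane
}
```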
I have been trying to develop a Pedestrian Dead Reckoning application for Android, and after taking care of the step detection and step length components, I have decided to tackle the orientation determination problem.
After stumbling on a couple of posts regarding coordinate transformation (and even chatting with a frequent answerer), I have been getting gradually better results, but there are still some things that bother me.
The experiment:
I walked forward Northward, turned back, and walked back Southward, then repeated the procedure towards West and East.
Issues:
1. I expected, while walking straight in several directions, the X and Y values to oscillate with the footsteps and the Z value to stay relatively stable throughout. Instead, the Y values behave this way, with the Z value having its expected behavior. How come? Does it have anything to do with me not using remapCoordinates()? (see Graph 1)
2. I expected the angle plots to jump around 180º and -180º, but why do they also do it around 35º? (see Graph 2)
Notes:
I am using gravity and magnetometer values to compute the rotation matrix, and multiplying it using OpenGL's multiplyMV() (a sketch of this step follows these notes);
I am not using remapCoordinates(), because I thought I didn't need to: the phone is upright in my pocket (Y points up/down, Z usually forward) and should tilt itself at most about 45º backwards and forwards;
Azimuth values seem OK, and do not have the oscillation described in issue 2 (see Graph 3).
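For reference, here is a sketch of the transformation from the first note, using SensorManager.getRotationMatrix and android.opengl.Matrix. The class and field names are mine; note the transpose, since android.opengl.Matrix assumes column-major storage while SensorManager fills the array row-major (device-to-world).

```java
import android.hardware.SensorManager;
import android.opengl.Matrix;

/** Sketch of the "rotation matrix * gravity" step described in the notes. */
public class WorldFrame {
    private final float[] rotation = new float[16];       // device -> world, as written by SensorManager
    private final float[] inclination = new float[16];
    private final float[] rotationColMajor = new float[16]; // transposed copy for android.opengl.Matrix
    private final float[] deviceVec = new float[4];
    private final float[] worldVec = new float[4];

    /** gravity / geomagnetic are the latest TYPE_GRAVITY and TYPE_MAGNETIC_FIELD readings. */
    public float[] gravityInWorldFrame(float[] gravity, float[] geomagnetic) {
        if (!SensorManager.getRotationMatrix(rotation, inclination, gravity, geomagnetic)) {
            return null;   // free fall or unreliable magnetic data: no matrix this time
        }
        // android.opengl.Matrix expects column-major matrices, SensorManager writes row-major,
        // so transpose before multiplyMV to get the device-to-world direction.
        Matrix.transposeM(rotationColMajor, 0, rotation, 0);

        deviceVec[0] = gravity[0];
        deviceVec[1] = gravity[1];
        deviceVec[2] = gravity[2];
        deviceVec[3] = 0f;          // 0 -> treat it as a direction, not a point

        Matrix.multiplyMV(worldVec, 0, rotationColMajor, 0, deviceVec, 0);
        return worldVec;            // roughly (0, 0, 9.81) when everything is right
    }
}
```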
Graphs:
Graph 1: World Reference Gravity Acceleration (blue is X, red is Y, green is Z)
Graph 2: World Reference Gravity Angles (blue is atan2(Y/X), red is atan2(Z/Y), green is atan2(Z/X))
Graph 3: Orientation Values (blue is azimuth, red is pitch, green is roll)
I want to show the movement of a car on a road. I have a text file containing the positions, and I built the movement by updating the position of the car every second. Let's say the screen is (200, 200): what should I do for positions that are outside this screen? How can I follow my car there?
Should I set up a camera or something?
By the way, my app is 2D.
From my experience, there is no built-in concept of setting up a camera in 2D programming, but I could be wrong. You'll have to do this yourself: create a camera class, and so on.
What I think will end up happening is that the car will stay centered on the screen and everything under it will move instead. It depends on what you're trying to achieve.
So if your car is moving northeast at 20 km/h, don't actually move the car; make everything under the car move southwest at 20 km/h (or however many pixels per frame that comes out to).
This is if you want to follow the car. If you want to re-center the "camera" on the car whenever it goes out of bounds, you'll probably have to move the landscape and the car towards the center of the screen.
EDIT: I'm assuming that the car will be the main focus, so it should always be at the center of the screen.
All objects in the game should have a velocity and a position. The position tells you where the object currently is and the velocity tells you how many x's and how many y's it should be moving per frame. So every frame you would say position = position + velocity.
The non-car objects can move off the screen as they wish without having the camera follow them, so let them go. Keep the car centered and adjust all the other objects' velocities based on the car's.
Ex:
Car's velocity (3, 0) ---> means it's moving right in a straight line at 3 pixels per frame
Object 1 velocity (4, 0) ---> means it's also moving right in a straight line, but at 4 pixels per frame
The velocity of object 1 will have to adjust itself according to the car's velocity. So say:
object1.position = object1.position + (object1.velocity - car.velocity)
Object 1's new velocity is (1, 0), so it's moving faster than the car by one.
If the car gains speed to let's say (5, 0) then object one will appear to be moving backwards by 1.
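A tiny sketch of that per-frame update in plain Java; Vector2, GameObject and the method names are made up for illustration, not taken from any framework:

```java
class Vector2 {
    float x, y;
    Vector2(float x, float y) { this.x = x; this.y = y; }
    Vector2 add(Vector2 o)      { return new Vector2(x + o.x, y + o.y); }
    Vector2 subtract(Vector2 o) { return new Vector2(x - o.x, y - o.y); }
}

class GameObject {
    Vector2 position;
    Vector2 velocity;
    GameObject(Vector2 p, Vector2 v) { position = p; velocity = v; }
}

class World {
    // The car sits at the centre of a 200x200 screen and never moves on screen.
    GameObject car  = new GameObject(new Vector2(100, 100), new Vector2(3, 0));
    GameObject obj1 = new GameObject(new Vector2(40, 100),  new Vector2(4, 0));

    // Called once per frame: everything except the car moves by its velocity
    // relative to the car, which is what makes the car look "followed".
    void updateFrame() {
        obj1.position = obj1.position.add(obj1.velocity.subtract(car.velocity));
        // car.position is left untouched so the car stays centred
    }
}
```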
What's the best way to create a dartboard game on Android? Is it possible to determine where a dart hit and which area it is in using a canvas, or do I need to use a different approach?
Edit
Sorry, I don't think I have explained my problem very well.
With SVG in HTML5 I can create a dartboard using shapes and assign an id to each shape. When a dart lands on a shape, the code knows which shape it landed on, gets its id, and calculates the score, e.g. id="20", id="60", id="40" for all 20 areas on a dartboard.
How would I do this on Android?
You've got the right idea. A simple approach would be to display a custom DartBoardView in an activity. The view would use an overridden onDraw to draw a dartboard image and any darts the user has "thrown", and then use View.onTouchEvent to handle touch events and translate them into new darts.
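A bare-bones sketch of such a view, with a plain circle standing in for the board image and class/field names of my own:

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.PointF;
import android.util.AttributeSet;
import android.view.MotionEvent;
import android.view.View;
import java.util.ArrayList;
import java.util.List;

public class DartBoardView extends View {
    private final List<PointF> darts = new ArrayList<>();
    private final Paint boardPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private final Paint dartPaint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public DartBoardView(Context context, AttributeSet attrs) {
        super(context, attrs);
        boardPaint.setColor(Color.DKGRAY);
        dartPaint.setColor(Color.RED);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // Stand-in for a real board image: a circle filling the view.
        float cx = getWidth() / 2f, cy = getHeight() / 2f;
        canvas.drawCircle(cx, cy, Math.min(cx, cy), boardPaint);
        // Draw every dart "thrown" so far.
        for (PointF d : darts) {
            canvas.drawCircle(d.x, d.y, 8f, dartPaint);
        }
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            darts.add(new PointF(event.getX(), event.getY()));
            invalidate();   // redraw with the new dart
            return true;
        }
        return super.onTouchEvent(event);
    }
}
```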
New answer based on your update:
You're going to have to do the hit detection manually. A touch event gives you an x-y coordinate with the upper left of the view as the origin. Calculating points would go as follows:
Offset the given coords so they are relative to the center of your board (i.e. if touching the middle of the board gives (100, 150), translate that to (0, 0)).
Get the angle of the touch to find which numbered section of the board was hit. Something like double touchAngle = Math.atan2(y, x); you might need to convert from radians to degrees here.
Map the angle to a base point value (e.g. if angle > 9 && angle < 27, base points = 7).
Multiply the base point value by 2 or 3 if the distance of the touch from the center (distance = Math.sqrt(x*x + y*y)) falls within the double or treble ring.
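Putting those steps together, here is a rough sketch of the scoring maths. The sector order is the standard dartboard layout; the ring radii are made-up pixel values you would replace with measurements from your own board graphic.

```java
public final class DartScorer {
    // Numbers around the board, clockwise, starting with the sector centred at the top (20).
    private static final int[] SECTORS = {
        20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8, 11, 14, 9, 12, 5
    };
    // Placeholder radii in pixels; measure these from your own board image.
    private static final float DOUBLE_OUTER = 200f;
    private static final float DOUBLE_INNER = 190f;
    private static final float TREBLE_OUTER = 120f;
    private static final float TREBLE_INNER = 110f;
    private static final float OUTER_BULL   = 16f;
    private static final float INNER_BULL   = 6f;

    /** cx, cy is the board centre in view coordinates; tx, ty is the touch point. */
    public static int score(float cx, float cy, float tx, float ty) {
        // Step 1: offset the touch so the board centre is (0, 0).
        float x = tx - cx;
        float y = ty - cy;
        double distance = Math.sqrt(x * x + y * y);

        if (distance > DOUBLE_OUTER) return 0;    // off the board
        if (distance <= INNER_BULL)  return 50;   // bullseye
        if (distance <= OUTER_BULL)  return 25;   // outer bull

        // Step 2: angle measured clockwise from straight up, in degrees, range [0, 360),
        // so the "20" sector is centred on 0°. (Screen y grows downwards, hence -y.)
        double angle = Math.toDegrees(Math.atan2(x, -y));
        if (angle < 0) angle += 360;

        // Step 3: each sector spans 18°; shift by half a sector so 0° lands inside "20".
        int sectorIndex = (int) (((angle + 9) % 360) / 18);
        int base = SECTORS[sectorIndex];

        // Step 4: apply the double / treble rings based on distance from centre.
        if (distance >= DOUBLE_INNER) return base * 2;
        if (distance >= TREBLE_INNER && distance <= TREBLE_OUTER) return base * 3;
        return base;
    }
}
```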