Draw a line between 2 GPS points using Metaio and Canvas - Android

I'm writing here because I'm modifying the GPS-based example provided by Metaio to try to show a line between two GPS points in an AR application. The example works well and I'm able to show an object (an image, for example) at a GPS location, but when I try to bring in Canvas I get many errors. So my question is: what exactly do I need to draw a line using Metaio and Canvas? And if that's impossible, what should I use instead? I also have a problem relating the screen coordinates to the real-world coordinates of the GPS points. I found this, but I think I need the opposite:
virtual Vector3d metaio::IUnifeyeMobile::get3DPositionFromScreenCoordinates(int cosID, const Vector2d& point) [pure virtual, inherited]

Converts screen coordinates to the corresponding 3D point on the plane of the tracked target.

Parameters:
    cosID - The (one-based) index of the coordinate system in which the 3D point is defined.
    point - The 2D screen coordinate to use.

Returns:
    A 3D vector containing the coordinates of the resulting 3D point.
Sorry for my bad English; I'll be waiting for answers.
Thank you very much.
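For reference, the Canvas side of this can be as small as a transparent overlay View laid on top of the AR view; the open question is how to obtain the two screen points from the SDK (the inverse of get3DPositionFromScreenCoordinates, if your Metaio version exposes such a method). A minimal sketch, with the projection deliberately left outside the class:

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.PointF;
import android.view.View;

// Transparent overlay that draws a line between two screen-space points.
public class LineOverlayView extends View {

    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private PointF start;   // projected screen position of the first GPS point
    private PointF end;     // projected screen position of the second GPS point

    public LineOverlayView(Context context) {
        super(context);
        paint.setColor(Color.RED);
        paint.setStrokeWidth(8f);
    }

    // Call this every frame with freshly projected screen coordinates.
    public void setEndpoints(PointF start, PointF end) {
        this.start = start;
        this.end = end;
        invalidate();   // request a redraw
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        if (start != null && end != null) {
            canvas.drawLine(start.x, start.y, end.x, end.y, paint);
        }
    }
}
```

You would add the overlay on top of the AR view (e.g. with addContentView) and call setEndpoints() whenever you re-project the two GPS points.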

You might try asking your question at the Metaio Helpdesk:
http://helpdesk.metaio.com/
The site is watched by Metaio techs and has many Metaio/Junaio developers who participate. You might get an answer there quicker. If you do figure out the problem, be sure to post the solution back here!

Related

How to use the numbers from Game Rotation Vector in Android?

I am working on an AR app that needs to move an image depending on the device's position and orientation.
It seems that the Game Rotation Vector should provide the necessary data to achieve this.
However, I can't seem to understand what the values I get from the GRV sensor mean. For instance, in order to reach the same value on the Z axis I have to rotate the device 720 degrees. This seems odd.
If I could somehow convert these numbers to angles from the reference frame of the device towards the x, y, z coordinates, my problem would be solved.
I have googled this issue for days and didn't find any sensible information on the meaning of the GRV coordinates and how to use them.
TL;DR: What do the numbers of the GRV sensor mean? And how do I convert them to angles?
As the docs state, the GRV sensor gives back a 3D rotation vector, represented by three components:
x axis (x * sin(θ/2))
y axis (y * sin(θ/2))
z axis (z * sin(θ/2))
This is confusing at first. Here (x, y, z) is the unit vector of the rotation axis and θ (pronounced theta) is the single angle the device has rotated through around that axis; the three components together encode one axis-angle rotation rather than three independent angles, which isn't obvious from the docs.
Note also that when working with angles, especially in 3D, we generally use radians, not degrees, so theta is in radians. This looks like a good introductory explanation.
But the reason it's given to us in this format is that it can be used directly in matrix rotations, especially as a quaternion. In fact, these are the first three components of a unit quaternion: the vector part, i.e. the rotation axis scaled by sin(θ/2). The 4th component is the scalar part, cos(θ/2); together the four numbers package the whole rotation into a single object that is easy to compose and interpolate.
These are directly usable in OpenGL, which is the 3D library of choice on Android (and most of the rest of the world). Check this tutorial out for some OpenGL rotation info, this one for general quaternion theory as applied to 3D programming, and this example by Google for Android, which shows exactly how to use this information directly.
If you read the articles, you can see why you get it in this form and why it's called Game Rotation Vector - it's what's been used by 3D programmers for games for decades at this point.
TLDR; This example is excellent.
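To answer the TL;DR directly: the three numbers are the vector part of the orientation quaternion, and the standard SensorManager helpers turn them into a rotation matrix and then into familiar azimuth/pitch/roll angles. A minimal sketch (my own class name, not code from the linked example):

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class GrvListener implements SensorEventListener {

    private final float[] rotationMatrix = new float[16]; // 4x4, also usable with OpenGL
    private final float[] orientation = new float[3];     // azimuth, pitch, roll in radians

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_GAME_ROTATION_VECTOR) return;

        // Turn the (x*sin(θ/2), y*sin(θ/2), z*sin(θ/2)) components into a rotation matrix.
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);

        // Extract Euler-style angles: orientation[0] = azimuth, [1] = pitch, [2] = roll.
        SensorManager.getOrientation(rotationMatrix, orientation);

        float azimuthDeg = (float) Math.toDegrees(orientation[0]);
        float pitchDeg   = (float) Math.toDegrees(orientation[1]);
        float rollDeg    = (float) Math.toDegrees(orientation[2]);

        // If you want the quaternion itself (returned as [w, x, y, z]):
        // float[] q = new float[4];
        // SensorManager.getQuaternionFromVector(q, event.values);
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* not needed here */ }
}
```

The same rotationMatrix array (allocated as float[16] so it is a full 4x4 matrix) can be handed straight to OpenGL, which is what the edit below is about.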
Edit - How to use this to show a 2D image which is rotated by this vector in 3D space.
In the example above, SensorManager.getRotationMatrixFromVector converts the Game Rotation Vector into a rotation matrix which can be applied to rotate anything in 3D. To apply this rotation to a 2D image, you have to think of the image in 3D, so it's actually a segment of a plane, like a sheet of paper. So you'd map your image, which in the jargon is called a texture, onto this plane segment.
Here is a tutorial on texturing cubes in OpenGL for Android, with example code and an in-depth discussion. From cubes it's a short step to a plane segment - it's just one face of a cube! In fact, that's a good resource for getting to grips with OpenGL on Android; I'd recommend reading the previous and subsequent tutorial steps too.
You also mentioned translation. Look at the onDrawFrame method in the Google code example: there is a translation using gl.glTranslatef and then a rotation using gl.glMultMatrixf. This is how you translate and rotate.
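Stripped down, the pattern looks roughly like the sketch below (my own class, assuming an OpenGL ES 1.x renderer and that rotationMatrix is the float[16] filled by getRotationMatrixFromVector as above; the actual drawing of the textured plane is left out):

```java
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.opengl.GLSurfaceView;
import android.opengl.GLU;
import android.opengl.Matrix;

public class OrientedRenderer implements GLSurfaceView.Renderer {

    // Filled elsewhere via SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values).
    final float[] rotationMatrix = new float[16];

    public OrientedRenderer() {
        Matrix.setIdentityM(rotationMatrix, 0);
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        gl.glClearColor(0f, 0f, 0f, 1f);
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        gl.glViewport(0, 0, width, height);
        gl.glMatrixMode(GL10.GL_PROJECTION);
        gl.glLoadIdentity();
        GLU.gluPerspective(gl, 60f, (float) width / height, 0.1f, 100f);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

        gl.glMatrixMode(GL10.GL_MODELVIEW);
        gl.glLoadIdentity();

        // 1) Translate: push the plane away from the camera...
        gl.glTranslatef(0f, 0f, -3f);

        // 2) ...then rotate it by the device orientation.
        gl.glMultMatrixf(rotationMatrix, 0);

        // drawTexturedQuad(gl);  // hypothetical: draw your image-as-a-plane here
    }
}
```

Swapping the two calls gives a visibly different result, which is the ordering point the next paragraph makes.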
The order in which these operations are applied matters. Here's a fun way to experiment with that: check out Livecodelab, a live 3D sketch-coding environment which runs inside your browser. In particular, this tutorial encourages reflection on the ordering of operations. Obviously the command move is a translation.

can I draw polygons on texture with GLSL?

I'm completely new to OpenGL, so my question might sound stupid, but I've been trying to do one thing for more than a week and I'm completely stuck.
I'm trying to draw a globe that you can rotate and zoom and pick a country on, on Android. There are no problems with rotation, but there are some with zooming (the texture looks ugly as the camera comes closer) and I have no idea how to implement country picking. I have an array for each country with lat/long for each vertex of the country border. But how can I draw it on the sphere?
I've been trying to convert lat/long to x,y,z and draw lines, but there is a triangulation issue when building the polygons. Is it possible to draw onto the texture at run-time with shaders? And is it possible to get the touch point on the texture? (I'd like to highlight the selected country by filling it with another color.)
I don't need a map, just the countries in one color, the borders in another, and the rest of the globe in a third.
I'm using the Rajawali lib for this purpose. It is open source, but it has no comments and its tiny documentation is outdated, so if you know a better framework, please suggest it.
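For reference, the lat/long to x,y,z part is the standard spherical-to-Cartesian conversion; the axis order and signs depend on how your sphere's texture is mapped, so treat this as a sketch to adapt rather than a drop-in (class and method names are mine):

```java
// Convert geographic coordinates to a point on a sphere of the given radius.
// Assumes a Y-up, right-handed coordinate system; swap axes/signs if your engine
// or texture mapping differs.
public final class SphereCoords {

    private SphereCoords() {}

    public static float[] latLonToXyz(double latDeg, double lonDeg, double radius) {
        double lat = Math.toRadians(latDeg);
        double lon = Math.toRadians(lonDeg);
        double x = radius * Math.cos(lat) * Math.cos(lon);
        double y = radius * Math.sin(lat);
        double z = -radius * Math.cos(lat) * Math.sin(lon);
        return new float[] { (float) x, (float) y, (float) z };
    }
}
```

Whether you then draw the borders as line-strip geometry or paint them into the texture is a separate choice; filling a country does indeed require triangulating its border polygon (or using a stencil-based fill).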

Transform Latitude,Longitude-Position on screen in augmented reality app

This is my first post on this forum and I'm very new to programming. I want to build an application where I can see exactly where some GPS positions are on my phone's screen. I know a lot of applications, like Junaio, Mixare and others, but they only show the direction to the objects and they are not very accurate (their goal is not to project objects at the exact position on screen) - so I want to build it myself. I program on Android, but I think it would be the same on iPhone.
I followed the steps suggested by dabhaid:
There are three steps.
1) Determine your position and orientation using sensors.
2) Convert from GPS coordinate space to a planar coordinate space by determining the relative position and bearing of known GPS coordinates using e.g. great-circle distance and bearing. (Your device stays at the origin of the coordinate space with this scheme.)
3) Do a perspective projection http://en.wikipedia.org/wiki/3D_projection#Perspective_projection to figure out where on the plane that is your display (ok, your camera sensor) the objects should appear, so you can augment them.
Step 1: easy, I have the GPS position and all orientations from my mobile device (x, y, z). For further refinement, I can use some algorithm to smooth these values (average, low-pass filter, whatever).
Step 2: I don't know what exactly is meant by a planar coordinate space. I have some different approaches to convert my GPS coordinate space. One of them is ECEF (earth-centered), where 0,0,0 is the center of the earth. Somehow this doesn't look good to me, because every little change along ONE axis results in changes to the other two axes. So if I change the altitude, all three axes change. I don't know if I can follow step 3 with this coordinate system.
In step 2 it is mentioned: using haversine. This would give me the distance to the point, but I don't get x, y, z from it. Do I have to calculate x, y using trigonometry (bearing (alpha) + distance (hypotenuse))?
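For reference, that trigonometry looks like the sketch below, using android.location.Location for the distance and bearing (the class and method names are mine):

```java
import android.location.Location;

public final class GeoToPlane {

    private GeoToPlane() {}

    /**
     * Returns the target's position relative to the observer as
     * { east, north, up } in metres (a local planar frame with the observer
     * at the origin), which is accurate enough over short AR distances.
     */
    public static double[] toEnu(Location observer, Location target) {
        float distance = observer.distanceTo(target);   // metres, great-circle
        float bearing  = observer.bearingTo(target);    // degrees east of true north
        double bearingRad = Math.toRadians(bearing);

        double east  = distance * Math.sin(bearingRad);
        double north = distance * Math.cos(bearingRad);
        double up    = target.getAltitude() - observer.getAltitude(); // 0 if altitude unknown

        return new double[] { east, north, up };
    }
}
```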
Step 3: This method looks really cool! If I have my coordinate space from step 2, I can calculate d_x, d_y, d_z using the formula on Wikipedia. But after this step I'm not finished yet, because I just have 3D coordinates, and to project onto my screen I only need two. The Wikipedia text continues by calculating b_x, b_y using e_x, e_y, e_z, which is the viewer's position relative to the display surface - how can I get these values from my mobile device (Android/iOS)? Another approach suggested on Wikipedia calculates b_x, b_y using s_x, s_y (the screen size) and r_x, r_y (the recording surface size). Again, how can I get the recording surface size from my mobile device?
I can't find anything about it on the internet. It seems that nobody on Android/iOS has ever implemented a perspective projection before...
Thank you very much for all of your answers! Also, links to useful sites would help!
I think you can find many answers in this other thread: Transform GPS-Points to Screen-Points with Perspective Projection in Android.
Hope it helped, bye!
Here's a simple solution I put together for this issue:
A: Mapping GPS locations on the camera preview in Android
Hope it helped. :D
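For reference, once a place's offset has been rotated from the planar frame of step 2 into the camera frame (+z forward, +x right, +y up, using the rotation matrix from the orientation sensors), the step-3 projection reduces to a simple pinhole model driven by the camera's field of view; a sketch (helper names are mine, and the field-of-view values could come from e.g. Camera.Parameters.getHorizontalViewAngle() / getVerticalViewAngle()):

```java
import android.graphics.PointF;

public final class PinholeProjection {

    private PinholeProjection() {}

    /**
     * Projects a point given in camera coordinates (metres) onto the screen.
     * hFovDeg / vFovDeg are the camera's horizontal and vertical view angles;
     * screenW / screenH are the preview size in pixels.
     * Returns null if the point is behind the camera.
     */
    public static PointF project(double xCam, double yCam, double zCam,
                                 float hFovDeg, float vFovDeg,
                                 int screenW, int screenH) {
        if (zCam <= 0) {
            return null; // behind the viewer, nothing to draw
        }
        double tanH = Math.tan(Math.toRadians(hFovDeg) / 2.0);
        double tanV = Math.tan(Math.toRadians(vFovDeg) / 2.0);

        float screenX = (float) (screenW / 2.0 * (1.0 + (xCam / zCam) / tanH));
        float screenY = (float) (screenH / 2.0 * (1.0 - (yCam / zCam) / tanV));
        return new PointF(screenX, screenY);
    }
}
```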

Android: given a current location and lat/long of places around me, how to decide which places are visible in the camera?

I am creating an AR app for Android that writes the names of places/buildings/etc. over the camera view when I point the live camera at them. I get my current location in lat and long, and I am also able to get a list of places (with their lat/long) within a certain radius of my current location.
However, the most confusing part to implement is showing only those places which are visible in the camera at that moment (and hiding the rest). One idea was to take the azimuth the device is currently facing, calculate the bearing from my location to each of the places within the set radius, get the camera's horizontal view angle using getHorizontalViewAngle(), and then check which bearings fall into the interval [device_azimuth - (getHorizontalViewAngle()/2) ; device_azimuth + (getHorizontalViewAngle()/2)].
However, I don't think this is a very efficient way; can anyone suggest a solution, or maybe someone has had a similar problem and found a good one? If it is difficult to understand my problem, let me know and I will try to explain in more detail.
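For reference, the interval test described above comes down to a wrap-around-safe angular difference; a sketch (class and method names are mine):

```java
public final class ViewCheck {

    private ViewCheck() {}

    /**
     * True if bearingToPlaceDeg lies within the camera's horizontal field of view,
     * centred on the device's current azimuth. All angles are in degrees.
     */
    public static boolean isInView(float deviceAzimuthDeg,
                                   float bearingToPlaceDeg,
                                   float horizontalViewAngleDeg) {
        // Signed smallest difference between the two angles, mapped into (-180, 180].
        float diff = ((bearingToPlaceDeg - deviceAzimuthDeg + 540f) % 360f) - 180f;
        return Math.abs(diff) <= horizontalViewAngleDeg / 2f;
    }
}
```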
You are doing the right thing, but in our project we found it better (performance-wise) to use the rotation matrix instead of the azimuth. You can take a look at the source code of the mixare augmented reality engine. It's on GitHub: https://github.com/mixare/mixare
The core logic is in the MixView class. The main idea is to convert anything to vectors and project them onto a "virtual" sphere that surrounds the phone.
HTH,
Daniele

How to identify the object touched in Android OpenGL ES

The question is simple: how can I identify which object has been touched by the user in OpenGL?
I've tried using the onTouchEvent callback, but it only returns the X, Y screen position.
A similar question was asked (& answered) in this thread:
Detect user's touches over an OpenGL square
Basically there are two methods. The first is rendering all objects to an off-screen buffer, each in a different colour, and then reading the colour at the 'pick coordinate' to identify your object.
The other (and I think less resource-intensive) method is retrieving the 'ray' for the touch point and then doing a hit test against bounding boxes you provide for all objects currently rendered on the screen.
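A sketch of the ray approach, assuming you keep your view and projection matrices as float[16] arrays (as you typically do with ES 2.0, e.g. built with android.opengl.Matrix); the class and method names are mine:

```java
import android.opengl.Matrix;

public final class TouchRay {

    private TouchRay() {}

    /**
     * Unprojects a touch point into two world-space points (on the near and far
     * clipping planes). The ray between them can then be tested against the
     * bounding volumes of your objects. viewMatrix and projMatrix are the same
     * float[16] matrices you render with; viewW/viewH is the viewport size.
     */
    public static float[][] screenPointToRay(float touchX, float touchY,
                                             int viewW, int viewH,
                                             float[] viewMatrix, float[] projMatrix) {
        // Normalised device coordinates in [-1, 1]; note the Y flip.
        float ndcX = 2f * touchX / viewW - 1f;
        float ndcY = 1f - 2f * touchY / viewH;

        float[] vp = new float[16];
        float[] invVP = new float[16];
        Matrix.multiplyMM(vp, 0, projMatrix, 0, viewMatrix, 0);
        Matrix.invertM(invVP, 0, vp, 0);

        float[] near = unproject(invVP, ndcX, ndcY, -1f); // on the near plane
        float[] far  = unproject(invVP, ndcX, ndcY,  1f); // on the far plane
        return new float[][] { near, far };
    }

    private static float[] unproject(float[] invVP, float ndcX, float ndcY, float ndcZ) {
        float[] in = { ndcX, ndcY, ndcZ, 1f };
        float[] out = new float[4];
        Matrix.multiplyMV(out, 0, invVP, 0, in, 0);
        // Perspective divide.
        return new float[] { out[0] / out[3], out[1] / out[3], out[2] / out[3] };
    }
}
```

Intersect the resulting ray with each object's bounding box or sphere and take the closest hit as the touched object.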
Edit:
If you're doing your rendering orthographically/in 2D, then this simplifies things somewhat.
You can do a simple hit test with the point you touched and a rectangle (or perhaps a circle or polygon) you provide for each image you've drawn.
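For that 2D case the hit test really is tiny; a sketch (my own names), where each drawn image's screen bounds are kept in a RectF:

```java
import android.graphics.RectF;
import java.util.List;

public final class HitTest2D {

    private HitTest2D() {}

    /**
     * Returns the index of the topmost object whose screen-space bounds contain
     * the touch point, or -1 if nothing was hit. Assumes the list is ordered
     * back-to-front, i.e. later entries are drawn on top.
     */
    public static int pick(List<RectF> objectBounds, float touchX, float touchY) {
        for (int i = objectBounds.size() - 1; i >= 0; i--) {
            if (objectBounds.get(i).contains(touchX, touchY)) {
                return i;
            }
        }
        return -1;
    }
}
```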
Hope this helps.
