How to show location markers on the camera view of a device - Android

I am looking for the logic to show various location markers based on the area visible through the camera view of a device on Android.
Something similar to the attached image.
I know the following things:
- How to get the current location
- I have all the locations to be marked
- How to create an overlay on the camera view

You can check my tutorial: https://www.netguru.co/blog/augmented-reality-mobile-android
I've described, step by step, what you need to do to achieve a similar outcome, albeit very simplified. If you have any questions, I'm happy to help.

Each location needs to have its own REAL position; for example, you need a database with the GPS location of each point.
Then you analyze frames from your camera. For each analyzed frame you check the azimuth you are currently looking at. Let's say you're looking straight north, azimuth 0 degrees. If we assume that your camera has a 90-degree field of view (FOV), then you know that your FOV spans azimuths from 315° to 45° (i.e., 45° to either side of north).
Now you check which of your points fall within that azimuth range, and if one of them does, you display it.
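A minimal sketch of that check (my own helper; the name and parameters are illustrative, not from any library):

    // Hypothetical helper: is a point's bearing inside the camera's horizontal FOV?
    // The modulo arithmetic handles the wrap-around at 0/360 degrees.
    static boolean isInView(float bearingToPointDeg, float deviceAzimuthDeg,
                            float horizontalFovDeg) {
        // Signed smallest difference between the two angles, in [-180, 180).
        float diff = ((bearingToPointDeg - deviceAzimuthDeg) % 360 + 540) % 360 - 180;
        return Math.abs(diff) <= horizontalFovDeg / 2;
    }

Here bearingToPointDeg can come from Location.bearingTo(), deviceAzimuthDeg from the orientation sensors, and horizontalFovDeg from Camera.Parameters.getHorizontalViewAngle().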

Augmented reality is the right approach for your problem.

After searching a lot, I found the following things.
An open SDK:
https://artoolkit.org/documentation/doku.php?id=1_Getting_Started:about_installing
and these examples:
https://github.com/tvbarthel/ChaseWhisplyProject
https://code.tutsplus.com/tutorials/android-sdk-augmented-reality-location-distance--mobile-8004
Hope this helps someone else as well.

Check this framework: https://www.layar.com/. There are two ways to plot your images on the view. The first is through pre-marked GPS positions; the other is through image recognition (you can use simple QR-code cards, which I guess is the simplest option, but if you're motivated to create an awesome solution, image recognition is the more rewarding route).
Regards.

Related

Android - Get touch point coordinates relative to zoomed image

I'm looking for the best way(s) of getting the position of a point I touched on an image, with its coordinates relative to the image size.
For example, I get, as the parent image, the map of a building floor, and the user needs to be able to zoom in and tap a point on the map that will be registered in a database in order to be retrieved later.
I have never really had to work with images and canvases on Android, so I'm a bit confused about what the best approach might be. I have already found a lot of documentation, but couldn't find anything that fits the idea I need to develop.
Thanks in advance for your help,
Matthieu
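A minimal sketch of one common approach (an assumption on my part: the zoom/pan is applied through the ImageView's image matrix; the helper name is made up): invert that matrix to map the touch point into the drawable's pixel space, then normalize by the drawable's size.

    import android.graphics.Matrix;
    import android.view.MotionEvent;
    import android.widget.ImageView;

    // Hypothetical sketch: map a touch event on a zoomed/panned ImageView back to
    // size-independent (0..1) coordinates in the original image.
    static float[] touchToImageRatio(ImageView imageView, MotionEvent event) {
        float[] point = { event.getX(), event.getY() };
        Matrix inverse = new Matrix();
        imageView.getImageMatrix().invert(inverse); // undo the current zoom/pan
        inverse.mapPoints(point);                   // now in drawable pixel space
        return new float[] {
            point[0] / imageView.getDrawable().getIntrinsicWidth(),
            point[1] / imageView.getDrawable().getIntrinsicHeight()
        };
    }

The returned ratios can be stored in the database and mapped back onto the image at any zoom level later.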

Transform latitude/longitude position to a position on screen in an augmented reality app

This is my first post on this forum and I'm very new to programming. I want to build an application where I can see exactly where some GPS values are on my phone. I know a lot of applications, like junaio, mixare and others, but they only show the direction to the objects and they are not very accurate (they don't have the goal of projecting them onto the exact position on screen), so I want to build it myself. I program on Android, but I think it would be the same on iPhone.
I followed the steps suggested by dabhaid:
There are three steps.
1) Determine your position and orientation using sensors.
2) Convert from GPS coordinate space to a planar coordinate space by determining the relative position and bearing of known GPS coordinates, using e.g. great circle distance and bearing. (Your device stays at the origin of the coordinate space with this scheme.)
3) Do a perspective projection http://en.wikipedia.org/wiki/3D_projection#Perspective_projection to figure out where on the plane that is your display (ok, your camera sensor) the objects should appear, so you can augment them.
Step 1: easy, I have the GPS position and all orientations from my mobile device (x, y, z). For further refinement, I can use some algorithm to smooth these values (average, low-pass filter, whatever).
Step 2: I don't know what is meant exactly by planar coordinate space. I have some different approaches to convert my GPS coordinate space. One of them is ECEF (earth-centered), where (0, 0, 0) is the center of the earth. Somehow this doesn't look good to me, because every little change along ONE axis results in changes along the other two axes: if I change the altitude, all three axes will change. I don't know if I can follow step 3 with this coordinate system.
In step 2, haversine is mentioned. This would give me the distance to the point, but I don't get x, y, z from it. Do I have to calculate x and y using trigonometry (bearing (alpha) + distance (hypotenuse))?
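(For reference, a minimal sketch of that trigonometry, under the assumption of a locally flat earth with the device at the origin: x = d * sin(bearing) points east, y = d * cos(bearing) points north.)

    import static java.lang.Math.*;

    // Hypothetical sketch: convert a GPS point to planar (east, north) meters
    // relative to the device, via haversine distance and initial bearing.
    public class GpsToPlanar {
        static final double EARTH_RADIUS_M = 6371000;

        // Haversine distance in meters between two lat/lon pairs (degrees).
        static double distance(double lat1, double lon1, double lat2, double lon2) {
            double dLat = toRadians(lat2 - lat1), dLon = toRadians(lon2 - lon1);
            double a = sin(dLat / 2) * sin(dLat / 2)
                     + cos(toRadians(lat1)) * cos(toRadians(lat2))
                     * sin(dLon / 2) * sin(dLon / 2);
            return 2 * EARTH_RADIUS_M * atan2(sqrt(a), sqrt(1 - a));
        }

        // Initial bearing in radians from point 1 to point 2.
        static double bearing(double lat1, double lon1, double lat2, double lon2) {
            double dLon = toRadians(lon2 - lon1);
            double y = sin(dLon) * cos(toRadians(lat2));
            double x = cos(toRadians(lat1)) * sin(toRadians(lat2))
                     - sin(toRadians(lat1)) * cos(toRadians(lat2)) * cos(dLon);
            return atan2(y, x);
        }

        // Planar (east, north) offset of the target, device at the origin.
        static double[] toPlanar(double myLat, double myLon, double lat, double lon) {
            double d = distance(myLat, myLon, lat, lon);
            double theta = bearing(myLat, myLon, lat, lon);
            return new double[] { d * sin(theta), d * cos(theta) };
        }
    }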
Step 3: This method looks really cool! If I have my coordinate space from step 2, I can calculate d_x, d_y, d_z using the formula on Wikipedia. But after this step I'm not finished yet, because I just have the coordinates, and for projecting onto my screen I only need two coordinates. The Wikipedia text continues by calculating b_x, b_y using e_x, e_y, e_z, which is the viewer's position relative to the display surface. How can I get these values from my mobile device (Android/iOS)? Another approach suggested by Wikipedia is to calculate b_x, b_y using a formula involving s_x, s_y, which is the screen size, and r_x, r_y, which is the recording surface size. Again, how can I get the recording surface size from my mobile device?
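(For illustration, a minimal pinhole-projection sketch that sidesteps the e/s/r values by deriving focal lengths from the camera's field of view; treating the FOV as the known quantity is an assumption on my part.)

    // Hypothetical sketch: project a camera-space point (dx, dy, dz) onto the
    // screen. Convention: dz > 0 means "in front of the camera". fovXDeg/fovYDeg
    // can come from Camera.Parameters.getHorizontalViewAngle() and
    // getVerticalViewAngle().
    static float[] project(double dx, double dy, double dz,
                           double fovXDeg, double fovYDeg, int screenW, int screenH) {
        double halfW = screenW / 2.0, halfH = screenH / 2.0;
        // Focal lengths in pixels, derived from the field of view.
        double fx = halfW / Math.tan(Math.toRadians(fovXDeg) / 2);
        double fy = halfH / Math.tan(Math.toRadians(fovYDeg) / 2);
        return new float[] {
            (float) (halfW + fx * dx / dz),
            (float) (halfH - fy * dy / dz)  // screen y grows downwards
        };
    }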
I can't find anything about it on the internet. It seems that nobody on Android/iOS has ever implemented a perspective projection before...
Thank you very much for all of your answers! Links to useful sites would also help!
I think you can find many answers in this other thread: Transform GPS-Points to Screen-Points with Perspective Projection in Android.
Hope it helped, bye!
Here's a simple solution I came up with for this issue.
A: Mapping GPS locations on the camera preview in Android
Hope it helped. :D

Rectify a rectangle to get area

I need to get the area of a known object inside a scene in order to derive the distance to it. The problem is rectifying it so that the area is independent of the viewing angle.
I'm using OpenCV (on Android) with some Java code that is equivalent to this:
http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html#feature-homography
In other words: how do I get the area of the object as observed perpendicularly from that distance, given the H matrix?
Thank you in advance, and sorry for my poor English... :)
You can call cvCalibrateCamera, but I am not sure if it works with only one image. The algorithm it is based upon can cope with the one-image case; see section 3.1, where it says "if n=1...". So in a pinch you can re-implement it.
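Alternatively, sticking with the homography from the tutorial you linked, a rough sketch (my own illustration, not a calibrated solution) is to map the object's corners in the scene back into the reference frame through H and measure the area there:

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfPoint2f;
    import org.opencv.core.Point;
    import org.opencv.imgproc.Imgproc;

    // Hypothetical sketch: given the homography H (reference image -> scene) from
    // the feature_homography tutorial, rectify the detected corners and compute
    // the area in reference-image pixels.
    static double rectifiedArea(Mat H, Point[] sceneCorners) {
        MatOfPoint2f src = new MatOfPoint2f(sceneCorners);
        MatOfPoint2f dst = new MatOfPoint2f();
        Mat Hinv = H.inv();                         // scene -> reference frame
        Core.perspectiveTransform(src, dst, Hinv);  // rectify the four corners
        return Imgproc.contourArea(dst);
    }

The result is in pixels of the reference image, so you still need the reference image's physical scale to turn it into a real-world area.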

Android: given a current location and the lat/long of places around me, how to decide which places are visible in the camera?

I am creating an AR app for Android which writes the names of places/buildings/etc. over the camera view when I point the live camera at them. I get my current location in lat and long, and I am also able to get a list of places (with their lat/long) within a certain radius of my current location.
However, the most confusing part to implement is showing only those places which are visible in the camera at that moment (and not the places behind me or off to the sides). One idea was to calculate the azimuth of my current orientation, then calculate the azimuth from my location to each of the places within the set radius, then get the camera's horizontal view angle using getHorizontalViewAngle(), and with all these parameters determine which place azimuths fall into the interval [my_azimuth - (getHorizontalViewAngle()/2); my_azimuth + (getHorizontalViewAngle()/2)].
However, I think this is not a very efficient way. Can anyone suggest a solution, or maybe someone had a similar problem and found a good one? If my problem is difficult to understand, let me know and I will try to explain it in more detail.
You are doing the right thing, but in our project we found it better (performance-wise) to use the rotation matrix instead of the azimuth. You can take a look at the source code of the mixare augmented reality engine. It's on GitHub: https://github.com/mixare/mixare
The core logic is in the MixView class. The main idea is to convert everything to vectors and project them onto a "virtual" sphere that surrounds the phone.
HTH,
Daniele
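A minimal sketch of that vector idea (my own illustration of the approach, not code from mixare; it assumes you already have the world-frame offset to the place in meters east/north/up, e.g. from a great-circle distance and bearing calculation):

    import android.hardware.SensorManager;

    // Hypothetical sketch: instead of comparing azimuth angles, rotate the
    // world-space direction to a place into device coordinates and test it
    // against the camera frustum.
    static boolean isVisible(float[] gravity, float[] geomagnetic,
                             float east, float north, float up,
                             double halfFovXRad, double halfFovYRad) {
        float[] R = new float[9];
        if (!SensorManager.getRotationMatrix(R, null, gravity, geomagnetic)) {
            return false;
        }
        // getRotationMatrix gives v_world = R * v_device, so use the transpose
        // to go the other way: v_device = R^T * v_world (world: x=east, y=north, z=up).
        float dx = R[0] * east + R[3] * north + R[6] * up;
        float dy = R[1] * east + R[4] * north + R[7] * up;
        float dz = R[2] * east + R[5] * north + R[8] * up;
        // The back camera looks along the device's -z axis, so dz < 0 is "in front".
        if (dz >= 0) return false;
        return Math.abs(Math.atan2(dx, -dz)) < halfFovXRad
            && Math.abs(Math.atan2(dy, -dz)) < halfFovYRad;
    }

In a real app you would also remap the coordinate system for the screen rotation (SensorManager.remapCoordinateSystem) and smooth the sensor values, but this is the core of the test.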

Android Speedometer (Needle Gauge)

I'm creating a simple app that calculates the speed you're going and displays it in a speedometer graphic. I can do all the speed calculations, GPS calculations, etc., but I am not too sure about the animation. Does anyone have any good tutorials or examples of needle gauges, other than the thermometer example out there?
I know the post is quite old, but I had the same situation: there is no good control for representing speed. I guess many people are facing this.
I've implemented SpeedometerView myself: a simple speedometer with a needle and colored value ranges. Feel free to download!
https://github.com/ntoskrnl/SpeedometerView
This control was used in my app CardioMood.
The code is not optimized, but it works. Enjoy!
Check here. It is my code. If you have any questions, please let me know.
Visit https://github.com/mucahitsidimi/GaugeView
You can add the GaugeView to your project simply:
<com.sidimi.mucahit.gaugeview.GaugeView
    android:id="@+id/gaugeView"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"/>
You can set whatever width and height you want; it will calculate everything automatically.
Check the solution I found for my case.
Big thanks to the owner, Evelina Vrabie...
You could probably start with something like this.
Then, when transitioning between values, run an animation where the needle gradually moves to the next value at X units per unit of time.
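For example, a minimal sketch of that animation (assuming the needle is its own View rotated around its pivot; the angle range and sweep rate are made up):

    import android.animation.ObjectAnimator;
    import android.view.View;

    // Hypothetical sketch: rotate a needle View to the angle for the new speed
    // at a constant angular speed (-120 degrees at 0, a 240-degree sweep at max).
    static void animateNeedle(View needle, float speedKmh, float maxSpeed) {
        float target = -120f + 240f * Math.min(speedKmh, maxSpeed) / maxSpeed;
        float degreesPerSecond = 180f; // the "X units per unit of time"
        long duration = (long) (Math.abs(target - needle.getRotation())
                / degreesPerSecond * 1000);
        ObjectAnimator.ofFloat(needle, "rotation", target)
                .setDuration(duration)
                .start();
    }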
This question is also very similar to yours.
