My application's output is a panorama photo. I looked around for ways to display one, but I couldn't find a solution I liked. Almost everything I found was an exact copy of the same years-old project from code.google.com, using OpenGL ES 1.0 only.
I'd like good performance if possible, and I also want to avoid reinventing the wheel. Google's Open Spherical Camera API (https://developers.google.com/streetview/open-spherical-camera/) is great, but it covers capture; I still need an implementation for a viewer.
I'm wondering if it's possible to reuse StreetViewPanorama's capabilities to implement a 360° viewer while leaving out the Google Maps portion. It looks like getStreetViewPanorama requires a MapView. If not StreetViewPanorama, is there another API I could use?
Or is there some intent I can fire? In that case I should mention that I'd like to periodically update the displayed 360° image.
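For reference, newer Google Play Services releases do include a StreetViewPanoramaView that stands alone, with no MapView required, though it can only display Google's own Street View imagery, so it may not help with a custom, periodically updated image. A minimal sketch of the standalone view (the coordinates are just an example):

```java
import android.app.Activity;
import android.os.Bundle;
import com.google.android.gms.maps.StreetViewPanoramaOptions;
import com.google.android.gms.maps.StreetViewPanoramaView;
import com.google.android.gms.maps.model.LatLng;

public class PanoramaActivity extends Activity {
    private StreetViewPanoramaView panoramaView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        StreetViewPanoramaOptions options = new StreetViewPanoramaOptions()
                .position(new LatLng(48.8584, 2.2945)); // example location
        panoramaView = new StreetViewPanoramaView(this, options);
        panoramaView.onCreate(savedInstanceState);
        setContentView(panoramaView);
    }

    @Override
    protected void onResume() {
        super.onResume();
        panoramaView.onResume(); // the view has its own lifecycle, like MapView
    }

    @Override
    protected void onPause() {
        panoramaView.onPause();
        super.onPause();
    }

    @Override
    protected void onDestroy() {
        panoramaView.onDestroy();
        super.onDestroy();
    }
}
```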
Related
I am not sure how to achieve an effect like Google's 360-degree Street View on Android. I have even come across a similar feature in CommonFloor's app: it has a 360-degree panoramic effect, as shown in the link - CommonFloor's Live Tour
I have researched this and couldn't find many resources. I'm not even sure where I should start. Are there any libraries already available for this?
I found a library, PanoramaGL, but it hasn't been maintained for two years. Are there any alternatives I should consider before I commit to this library?
Thank you in advance
I'm wondering if it's possible to switch to a "navigation view" with the Google Maps API.
By navigation view, I mean the "3D Follower Perspective" one experiences when using Google Maps API to navigate to a certain location.
I do not want to use any navigation functionality; I just want something like this follower perspective.
I've seen this view in a Windows Phone app; it also used just about every sensor the device offered, such as the compass and gyroscope, to achieve an almost augmented-reality feel, as the map turned the same way you did.
Is anything comparable available on Android?
This is certainly possible.
Google I/O 2013 had a nice demo that shows something similar to what you want to achieve.
Start watching at 21:30: http://www.youtube.com/watch?v=_oZiK_NJuG8
They actually show some of the code you would use.
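The effect comes from the Maps Android API v2 camera: a high zoom combined with tilt and bearing gives the follower perspective. A rough sketch with arbitrary example values; feed the bearing from the compass/gyroscope to get the turn-with-the-device feel:

```java
import com.google.android.gms.maps.CameraUpdateFactory;
import com.google.android.gms.maps.GoogleMap;
import com.google.android.gms.maps.model.CameraPosition;
import com.google.android.gms.maps.model.LatLng;

public class FollowerCamera {
    // Animate the map camera into a tilted, rotated "follower" view.
    public static void follow(GoogleMap map, LatLng target, float bearingDegrees) {
        CameraPosition position = new CameraPosition.Builder()
                .target(target)          // point the camera looks at
                .zoom(17f)               // street-level zoom (example value)
                .bearing(bearingDegrees) // rotate the map to the device heading
                .tilt(65f)               // tilt toward the horizon for the 3D feel
                .build();
        map.animateCamera(CameraUpdateFactory.newCameraPosition(position));
    }
}
```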
Since I began programming, this forum has provided everything I've ever needed, so thank you for all of it!
Now I'm here to ask about a problem I haven't found covered here yet.
I'm working on an Android application. In my app, I have to display an Android panorama that is hosted on a remote server. I have two problems with this and hope you can help:
1. When I take a panorama with my phone and then connect the phone to my PC to copy it, the panorama becomes a plain JPEG image. I don't know how or why!
2. I have no idea how to view a panorama on Android. I've searched on Google and on Android forums and I'm still at my starting point, and I have to present my application next week!
So I'm turning to you to get me out of this hole.
Thanks.
There's a library that does this with spherical, cubic, and cylindrical panoramic images: PanoramaGL.
There's also a utility in the Google Play Services libraries, but only for spherical images; refer to this:
Android Support for Photo Sphere
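With Play Services, the Panorama API can hand a Photo Sphere off to the built-in spherical viewer. A minimal sketch, assuming the image carries Photo Sphere XMP metadata (which may be why your copied file looks like a plain JPEG: a photo sphere is a JPEG with extra metadata); the file path is just an example:

```java
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.common.api.ResultCallback;
import com.google.android.gms.panorama.Panorama;
import com.google.android.gms.panorama.PanoramaApi;

public class ViewPanoramaActivity extends Activity {
    private GoogleApiClient client;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        final Uri uri = Uri.parse("file:///sdcard/Pictures/pano.jpg"); // example path

        client = new GoogleApiClient.Builder(this)
                .addApi(Panorama.API)
                .addConnectionCallbacks(new GoogleApiClient.ConnectionCallbacks() {
                    @Override
                    public void onConnected(Bundle connectionHint) {
                        Panorama.PanoramaApi.loadPanoramaInfo(client, uri)
                                .setResultCallback(new ResultCallback<PanoramaApi.PanoramaResult>() {
                                    @Override
                                    public void onResult(PanoramaApi.PanoramaResult result) {
                                        Intent viewer = result.getViewerIntent();
                                        if (viewer != null) {
                                            startActivity(viewer); // opens the spherical viewer
                                        } // null means the image has no panorama metadata
                                    }
                                });
                    }

                    @Override
                    public void onConnectionSuspended(int cause) { }
                })
                .build();
        client.connect();
    }

    @Override
    protected void onDestroy() {
        client.disconnect();
        super.onDestroy();
    }
}
```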
For my final year project at university, I am extending an application called Rviz for Android. This is an application for Android tablets that uses ROS (robot operating system) to display information coming from robots. The main intent of the project is essentially the opposite of traditional augmented reality - instead of projecting something digital onto a view of the real world, I am projecting a view of the real world, coming from the tablet's camera, onto a digital view of the world (an abstract map). The intended purpose of the application is that the view of the real world should move on the map as the tablet moves around.
To move the view of the camera feed on screen, I am using the tablet's accelerometer and calculating distance travelled. This is inherently flawed, and as such, the movement is far from accurate (this itself doesn't matter that much - it's great material for my report). To improve the movement of the camera feed, I wish to use markers placed at predefined positions in the real world, with the intent that, if a marker is detected, the view jumps to the position of the marker. Unfortunately, while there are many SDKs out there that deal with marker detection (such as the Qualcomm SDK), they are all geared towards proper augmented reality (that is to say, overlaying something on top of a marker).
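For reference, the dead reckoning described above boils down to double integration, which is why the drift accumulates. A minimal sketch, assuming TYPE_LINEAR_ACCELERATION (gravity already removed); register it with SensorManager.registerListener:

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

public class DeadReckoning implements SensorEventListener {
    private long lastTimestamp = 0;
    private final float[] velocity = new float[3];
    private final float[] position = new float[3];

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_LINEAR_ACCELERATION) return;
        if (lastTimestamp != 0) {
            float dt = (event.timestamp - lastTimestamp) * 1e-9f; // ns to s
            for (int i = 0; i < 3; i++) {
                velocity[i] += event.values[i] * dt; // integrate acceleration
                position[i] += velocity[i] * dt;     // integrate velocity
            }
            // Sensor noise is integrated twice, so position drifts without
            // bound; hence the need for marker-based correction.
        }
        lastTimestamp = event.timestamp;
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```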
So far, the only two frameworks I have identified that could be somewhat useful are OpenCV (which looks very promising indeed, though I'm not very experienced with C++) and AndAR, which again seems very focused on traditional AR uses, but I might be able to modify. Would either of these frameworks be appropriate here? Is there any other way I could implement a solution?
If it helps at all, this is the source code of the application I am extending, and this is the source code for ROS on Android (the code which uses the camera is in the "android_gingerbread_mr1" folder). I can also provide a link to my extensions to Rviz, if that would also help. Thank you very much!
Edit: the main issue I'm having at the moment is trying to integrate the two separate classes which access the camera (JavaCameraView in OpenCV, and CameraPreviewView in ROS). They both need to be active at the same time, but they do different things, and I'm not sure how best to combine them. As previously mentioned, I'll link to/upload the classes in question if needed.
Have a look at the section about Template Matching in the OpenCV documentation. This thread may also be useful.
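In the OpenCV Java bindings, template matching is only a few lines. A minimal sketch; the threshold is an assumption you'd tune, and note that plain template matching is not scale- or rotation-invariant, so it works best when markers are seen from a consistent distance and angle:

```java
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.imgproc.Imgproc;

public class TemplateMatcher {
    /** Returns the top-left corner of the best match, or null if too weak. */
    public static Point findMarker(Mat frame, Mat template, double threshold) {
        int resultCols = frame.cols() - template.cols() + 1;
        int resultRows = frame.rows() - template.rows() + 1;
        Mat result = new Mat(resultRows, resultCols, CvType.CV_32FC1);

        // Slide the template over the frame, scoring each position
        Imgproc.matchTemplate(frame, template, result, Imgproc.TM_CCOEFF_NORMED);
        Core.MinMaxLocResult mmr = Core.minMaxLoc(result);

        // With TM_CCOEFF_NORMED the best match is the maximum value
        return (mmr.maxVal >= threshold) ? mmr.maxLoc : null;
    }
}
```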
So I've managed to find a solution to my problem, and it's completely different from what I thought it would be. All of the image processing is offloaded onto a computer, and is performed by a ROS node. This node uses a library called ArUco to detect markers. The markers are generated by a separate program provided with the library, and each has its own unique ID.
When a marker is detected, a ROS message is published containing the marker's ID. The app on the tablet receives the message, and moves the real-world view according to which marker it receives. It works pretty well, though it's a bit unreliable, because I have to use a low image quality to make rendering and transport of the image quicker. And that's my solution! I may post my source code here once the project is completely finished.
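For the receiving side, a rosjava subscriber on the tablet looks roughly like this. The topic name, the std_msgs/Int32 message type, and the moveViewToMarker helper are assumptions for illustration; the real node publishes whatever the ArUco detector was configured with:

```java
import org.ros.message.MessageListener;
import org.ros.namespace.GraphName;
import org.ros.node.AbstractNodeMain;
import org.ros.node.ConnectedNode;
import org.ros.node.topic.Subscriber;

public class MarkerListenerNode extends AbstractNodeMain {

    @Override
    public GraphName getDefaultNodeName() {
        return GraphName.of("rviz_android/marker_listener"); // hypothetical name
    }

    @Override
    public void onStart(ConnectedNode connectedNode) {
        // Topic name and message type are assumptions for illustration
        Subscriber<std_msgs.Int32> subscriber =
                connectedNode.newSubscriber("aruco/marker_id", std_msgs.Int32._TYPE);
        subscriber.addMessageListener(new MessageListener<std_msgs.Int32>() {
            @Override
            public void onNewMessage(std_msgs.Int32 message) {
                moveViewToMarker(message.getData());
            }
        });
    }

    // Hypothetical helper: look up the marker's predefined real-world
    // position and jump the camera-feed view there.
    private void moveViewToMarker(int markerId) {
    }
}
```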
Is there an Android framework that can be used in an app to recognize a 3D object and send the user to a video? This should fall under augmented reality, but so far everything I have seen uses a 2D image to produce something 3D on the screen... my situation is the reverse of that. I tried using Vuforia but I couldn't get the SDK to work, and Unity needs an Android license. DroidAR doesn't seem to fit the bill either. Are there any tutorials on this? Thanks.
I have not used the feature myself, but Metaio has "markerless" 3D object tracking, as well as the ability to do video playback within the SDK. I'm sure that if you would rather simply redirect to a video (YouTube or similar), it would not be exceptionally difficult; see the sketch below the link.
http://www.metaio.com/software/mobile-sdk/features/
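Redirecting to a video once tracking fires really is simple; a minimal sketch with a placeholder URL, letting YouTube (or any installed player) handle playback:

```java
import android.content.Context;
import android.content.Intent;
import android.net.Uri;

public class VideoLauncher {
    // Fire a plain VIEW intent; the YouTube app (or any video player
    // registered for the URL) handles playback.
    public static void openVideo(Context context) {
        Uri video = Uri.parse("https://www.youtube.com/watch?v=VIDEO_ID"); // placeholder URL
        context.startActivity(new Intent(Intent.ACTION_VIEW, video));
    }
}
```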
Metaio's mobile SDK is similar to Vuforia, so if you had trouble with that, you might have difficulty getting it up and running. If your programming skills aren't up to it, you might consider looking into Junaio, an AR browser made by Metaio. With Junaio you simply create a content channel rather than building the app from scratch. Again, I have not actually tried this feature yet, but the documentation seems to indicate that 3D tracking is available in Junaio:
http://www.junaio.com/develop/quickstart/3d-tracking-and-junaio/
Good luck!