Related:
Is there a HTML5/ jQuery Spherical Panorama viewer that works with touch mobile devices
Indirectly related:
Orbiting around the origin using a device's orientation
I am looking for a way to view different parts of a spherical panorama / equirectangular projection by moving a smartphone or tablet in space.
Basically just like http://takemetotomorrowland.com/explore/bridgeway-plaza, where different parts of the environment can be seen by pointing the phone in the corresponding direction.
Ideally I would like to be able to view different parts of such an image by moving the device in space, i.e. turning and moving the device in the real world to see different parts of the projection in the browser.
Now I know this is done with WebGL and a spherical panorama viewer like Pannellum, but how is the device-movement part done? Is there a library or plugin for that?
What is the part of the device's technology that senses movement in 3D space called, and how can I translate those movements into something usable from HTML/JS?
I know this is not a very specific programming question, and I am sorry for that, but I am really looking for a plugin or library, or at least the name of the sensor responsible for tracking the device's movement, so that I can work with it.
The feature you are looking for has been implemented in DeviceOrientationControls. Under the hood it listens to the browser's deviceorientation events, which expose the device's gyroscope/accelerometer-based orientation to JavaScript.
See this example.
three.js r.73
Related
Using ARCore and/or Sceneform, would it be possible to place circles accurately on a real-life object? Let's say I had a real-world table and a known set of coordinates where small (10 mm) AR "stickers" need to be placed. They could be on the top/side/underside of the table and need to be placed accurately to the millimetre. I am currently solving this problem with a number of fixed-mounted lasers. Would this be possible to accomplish using ARCore on a mobile device, either a phone or AR/smart glasses? Accuracy is critical, so how accurate could a solution using ARCore be?
I think you may find that current AR on mobile devices would struggle to meet your requirements.
Partly because, in my experience, there is a certain amount of drift or movement with Anchors, especially when you move the view quickly or leave and come back to a view. Given the technologies available to create and locate anchors (i.e. movement sensors, camera, etc.), it is natural that this will not give consistent millimetre accuracy.
Possibly a bigger issue for you at this time is occlusion: currently ARCore does not support it. This means that if you place your renderable behind an object, it will still be drawn in front of, or on top of, the object as you move away or zoom out.
If you use multiple markers or AR "stickers", your solution will be fairly precise, since the locations of your circles will be calculated relative to those markers. Image/marker-based tracking is quite impressive with any of the augmented reality SDKs. However, markers as small as 10 mm can cause problems for marker detection. I would recommend creating these markers using AugmentedImageDatabase, where you can specify the real-world size of the images, which helps with tracking them. Then you can check whether ARCore can detect your images on the table. ARCore is not the fastest SDK when it comes to detecting images, but it can continue tracking even when the markers are not in the frame. If you need fast detection of markers, I would recommend the Vuforia SDK.
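A minimal sketch of how the database setup mentioned above might look with the standard ARCore Java API (the class name, marker name, bitmap source, and the 10 mm physical width are assumptions for illustration):

```java
import android.graphics.Bitmap;
import com.google.ar.core.AugmentedImageDatabase;
import com.google.ar.core.Config;
import com.google.ar.core.Session;

// Hypothetical helper: builds an image database where each marker's real-world
// width is supplied, which helps ARCore estimate pose and scale more quickly.
class MarkerSetup {

    static AugmentedImageDatabase buildMarkerDatabase(Session session, Bitmap markerBitmap) {
        AugmentedImageDatabase db = new AugmentedImageDatabase(session);
        // 0.01f = 10 mm physical width of the printed "sticker" (assumed value).
        db.addImage("table_marker_1", markerBitmap, 0.01f);
        return db;
    }

    static void configureSession(Session session, Bitmap markerBitmap) {
        Config config = new Config(session);
        config.setAugmentedImageDatabase(buildMarkerDatabase(session, markerBitmap));
        session.configure(config);
    }
}
```

Note that ARCore's own guidance is that very small or low-contrast images track poorly, so testing detection with your actual sticker size is essential.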
I would like to create Android app for viewing images.
The idea is that users keep their tablet flat on the table (for sake of simplicity, only X and Y for now) and to scroll the picture (one that is too big to fit the screen) by moving their tablets (yes, this app has to use tablet movement, sorry, no fingers allowed :) ).
I managed to get a basic framework implemented (implementing sensor listeners is easy), but I'm not sure how to translate TYPE_LINEAR_ACCELERATION readings into pixels. I'm pretty sure it can be done (for example, check the "photo sphere" or "panorama" apps that move content exactly as you move your phone), but I can't find any working prototype online.
Where can I see how that kind of "magic" is done in real world?
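For illustration only, a minimal sketch of the naive approach the question describes: double-integrate TYPE_LINEAR_ACCELERATION to get a displacement estimate, then convert metres to pixels with an assumed scale factor. In practice raw double integration drifts badly within seconds, which is why apps such as the photo-sphere viewers mentioned usually rely on the rotation/orientation sensors (or fuse several sources) instead.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

// Sketch only: integrates linear acceleration twice to estimate X/Y displacement,
// then converts metres to screen pixels. Drift makes this unusable on its own;
// it is shown purely to illustrate the unit conversion being asked about.
public class MotionScrollListener implements SensorEventListener {
    private static final float PIXELS_PER_METRE = 5000f; // assumed scale factor
    private long lastTimestampNs = 0;
    private final float[] velocity = new float[2];       // m/s
    private final float[] displacement = new float[2];   // m

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_LINEAR_ACCELERATION) return;
        if (lastTimestampNs != 0) {
            float dt = (event.timestamp - lastTimestampNs) * 1e-9f; // ns -> s
            for (int i = 0; i < 2; i++) {                // X and Y only
                velocity[i] += event.values[i] * dt;     // acceleration -> velocity
                displacement[i] += velocity[i] * dt;     // velocity -> position
            }
        }
        lastTimestampNs = event.timestamp;

        int scrollX = Math.round(displacement[0] * PIXELS_PER_METRE);
        int scrollY = Math.round(displacement[1] * PIXELS_PER_METRE);
        // scrollTo(scrollX, scrollY) on the image view would go here (omitted).
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```

The listener would be registered in the usual way with SensorManager.registerListener() against the default TYPE_LINEAR_ACCELERATION sensor.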
I have searched a lot about developing a 360 camera like Google Street View but am still not able to arrive at a solution.
I tried panoramagl-android but it is not what I am looking for.
So can anyone please give me an idea or suggest anything for creating a spherical camera application?
360 images and videos are generally created with dedicated cameras, or groups of regular cameras, and the result is then 'stitched' together to produce the 360 representation.
The usual way to represent a 360 image or video at this time is an equirectangular projection, similar to the technique used to depict the spherical globe on flat maps of the world.
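To make the projection concrete, the mapping itself is simple: longitude maps linearly to the horizontal axis and latitude to the vertical axis. A small sketch (the image dimensions and angle conventions are assumptions):

```java
// Maps a viewing direction given as longitude/latitude (radians) to pixel
// coordinates in an equirectangular image of size width x height.
// Assumed conventions: longitude in [-PI, PI), latitude in [-PI/2, PI/2].
class Equirectangular {
    static int[] directionToPixel(double lonRad, double latRad, int width, int height) {
        double x = (lonRad + Math.PI) / (2 * Math.PI) * width;
        double y = (Math.PI / 2 - latRad) / Math.PI * height;
        return new int[] { (int) x, (int) y };
    }
}
```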
If you are trying to do this with a regular phone, you face the issue that you only have one camera, so you won't get images from multiple cameras at the same time to stitch together. This is easier to understand visually: a typical capture rig is a cluster of cameras facing outwards, each recording an overlapping view of the scene.
You then need software to 'stitch' the different videos together. There are quite a few options, many of them proprietary; VideoStitch is probably the best known at this time: http://www.video-stitch.com/.
Note that this is processing-intensive, so it is nearly always done on relatively high-powered servers rather than on mobile devices.
The requirement is to create an Android application running on one specific mobile device that records video of a human eye pupil dilating in response to a bright light (which is physically attached to the mobile device). The video is then post-processed frame by frame on the device to detect & measure the diameter of the pupil AND the iris in each frame. Note the image processing does NOT need doing in real-time. The end result will be a dataset describing the changes in pupil (& iris) size over time. It's expected that the iris size can be used to enhance confidence in the pupil diameter data (eg removing pupil size data that's wildly wrong), but also as a relative measure for how dilated the eye is at any point.
I am familiar with developing Android mobile apps, but my experience with image processing is very limited. I've researched solutions and it seems that the answer may lie with the OpenCV/JavaCV libraries, which should provide shape detection (e.g. http://opencvlover.blogspot.co.uk/2012/07/hough-circle-in-javacv.html), but can anyone provide guidance on these specific questions:
Am I right to think it can detect two circle shapes within a bitmap, one inside the other? i.e. shapes inside each other are not a problem.
Is it true that JavaCV can detect a circle and return a position and radius/diameter? i.e. it doesn't return a set of vertices that then require further processing to compare with a circle? It seems to have a HoughCircles method, so I think yes (a short sketch of that call follows below).
What processing of each frame is typically used before doing shape detection? For example an algorithm to enhance edges, smooth, or remove colour?
Can I use it not just to detect the presence of the circles, but to measure the diameter of the detected circles (in pixels, which can then easily be converted to real-world measurements because known hardware is being used)? I think yes, but it would be great to hear confirmation from those more familiar.
This project is a non-commercial charitable project, so any help especially appreciated.
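For reference, a minimal sketch of what the HoughCircles call looks like in the plain OpenCV 3.x+ Java bindings (JavaCV wraps the same function, though its call syntax differs): it does return a centre position and a radius in pixels for each detected circle. All parameter values below are placeholders and would need tuning for real eye footage.

```java
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

// Sketch: detect circles in a grayscale frame and read back centre + radius.
class CircleDetectionSketch {
    static void detectCircles(Mat gray) {
        Mat blurred = new Mat();
        Imgproc.medianBlur(gray, blurred, 5);          // reduce noise before Hough

        Mat circles = new Mat();
        Imgproc.HoughCircles(blurred, circles, Imgproc.HOUGH_GRADIENT,
                1.0,    // dp: inverse accumulator resolution
                50.0,   // minDist between circle centres
                100.0,  // param1: Canny high threshold
                30.0,   // param2: accumulator threshold
                10,     // minRadius (pixels)
                200);   // maxRadius (pixels)

        for (int i = 0; i < circles.cols(); i++) {
            double[] c = circles.get(0, i);            // {x, y, radius}
            double centreX = c[0], centreY = c[1], radiusPx = c[2];
            double diameterPx = 2 * radiusPx;          // diameter in pixels
            System.out.printf("circle %d: centre=(%.1f, %.1f) diameter=%.1f px%n",
                    i, centreX, centreY, diameterPx);
        }
    }
}
```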
I would really suggest using the NDK, as it is a bit richer in features. It also allows you to run and test your algorithms on a laptop with still images before pushing them to a device, speeding up development.
Pre-processing steps:
Typically one would use thresholding or Canny edge detection, plus morphological operations like erode and dilate.
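A rough sketch of those steps using the OpenCV Java bindings (the NDK/C++ calls are analogous); the thresholds and kernel sizes are placeholders:

```java
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

// Sketch: typical pre-processing before trying to segment the pupil/iris.
class Preprocess {
    static Mat run(Mat frameBgr) {
        Mat gray = new Mat();
        Imgproc.cvtColor(frameBgr, gray, Imgproc.COLOR_BGR2GRAY); // drop colour
        Imgproc.GaussianBlur(gray, gray, new Size(7, 7), 0);      // smooth noise

        Mat edges = new Mat();
        Imgproc.Canny(gray, edges, 40, 120);                      // edge map (thresholds illustrative)

        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(5, 5));
        Imgproc.dilate(edges, edges, kernel);                     // close small gaps
        Imgproc.erode(edges, edges, kernel);                      // thin back down
        return edges;
    }
}
```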
For detection of the iris/pupil, HoughCircles is not a very good method; feature-detection methods like MSER work better for not-so-well-defined circles. Here is another answer I wrote on the same topic which has code that could help.
If you are looking to measure the regions, I would suggest going through this blog. It has a clear explanation of the steps involved for a reasonably accurate measurement.
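Without claiming this is the approach the linked blog takes, one common way to turn a segmented region into a diameter measurement with the OpenCV Java bindings is to find contours in the binary mask and fit a minimum enclosing circle, which gives centre and radius in pixels:

```java
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.imgproc.Imgproc;

// Sketch: measure the largest region in a binary mask via a fitted circle.
class RegionMeasure {
    static double largestRegionDiameterPx(Mat binaryMask) {
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(binaryMask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        double bestDiameter = 0;
        for (MatOfPoint contour : contours) {
            Point centre = new Point();
            float[] radius = new float[1];
            Imgproc.minEnclosingCircle(new MatOfPoint2f(contour.toArray()), centre, radius);
            // A real implementation would also check circularity (contour area vs.
            // circle area) before trusting the fit; omitted here for brevity.
            bestDiameter = Math.max(bestDiameter, 2.0 * radius[0]);
        }
        return bestDiameter; // pixels; convert using the known optics/geometry
    }
}
```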
I'm planning on doing an AR application that will just use GPS technology to get a location, and then use the compass/gyroscope for tracking 6DOF viewfinder movements. It's a personal project for my own development, but I'm looking for starting places, as it's a new field to me, so this might be a slightly open-ended question with more than one right answer. By using GPS I am hoping to simplify the development of my first AR application at the cost of accuracy.
The idea for this AR app is not to use any vision processing (relying on GPS only), and to display 3D models on the screen at roughly correct distances (up to a point) from where the user is standing. It sounds simple, given that games work in a 3D world with a viewpoint and the locations of faces/objects/models to draw. My target platform will be mobile devices and tablets, potentially running one of these OSs: WM6, Windows Phone 7, or Android.
Most of the applications I have seen use markers and AR-ToolKit or ARTag, and those that use GPS tend to just display a point of interest or a flat box on the screen to state that you're at the desired location.
I've done some very limited work with 3D graphics programming, but are there any libraries that you think could get me started on this, rather than building everything from the bottom up? Ignoring the low accuracy of GPS (with regard to AR), I will have a defined point in 3D space (constantly moving as the GPS fix updates), and then a defined point at which to render a 3D model in the same 3D space.
I've seen some examples of applications that are similar, but nothing I can expand on, so can anyone suggest places to start or libraries to use that might be suitable for my project?
Sensor-based AR is doable from scratch without using any libraries. All you're doing is estimating your camera's pose in 6DOF using the sensors and GPS, and then performing a perspective projection which projects a known 3D point onto your camera's focal plane. You define your camera matrix from the sensors and GPS, and perform the projection on each new camera frame. If you get this up and running, that's plenty sufficient to begin projecting billboards, images, etc. into the camera frame.
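A minimal sketch of that projection step (plain Java, no libraries): take the target's offset from the camera in a local East-North-Up frame (derived from the GPS fix), rotate it into camera coordinates with the rotation matrix you get from the orientation sensors, then apply the pinhole model. The focal length and principal point are assumed inputs.

```java
// Sketch of a pinhole projection: world offset (ENU, metres) -> screen pixels.
// rotation is a row-major 3x3 matrix taking world coordinates into camera
// coordinates (e.g. derived from the device orientation sensors).
class PinholeProjection {
    // Returns {u, v} in pixels, or null if the point is behind the camera.
    static double[] projectToScreen(double[] offsetEnu, double[][] rotation,
                                    double focalPx, double cx, double cy) {
        // Rotate the world-frame offset into the camera frame.
        double xc = rotation[0][0] * offsetEnu[0] + rotation[0][1] * offsetEnu[1] + rotation[0][2] * offsetEnu[2];
        double yc = rotation[1][0] * offsetEnu[0] + rotation[1][1] * offsetEnu[1] + rotation[1][2] * offsetEnu[2];
        double zc = rotation[2][0] * offsetEnu[0] + rotation[2][1] * offsetEnu[1] + rotation[2][2] * offsetEnu[2];

        if (zc <= 0) return null;                 // behind the camera, don't draw

        double u = focalPx * (xc / zc) + cx;      // horizontal pixel coordinate
        double v = focalPx * (yc / zc) + cy;      // vertical pixel coordinate
        return new double[] { u, v };
    }
}
```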
Once you have a pin-hole camera model working you can try to compensate for your camera's wide-angle lens, for lens distortion etc.
For calculating relative distances there's the haversine formula.
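For reference, a straightforward Java implementation of the haversine formula (treating the Earth as a sphere of radius ~6371 km):

```java
// Great-circle distance in metres between two lat/lon points given in degrees.
class Haversine {
    static double distanceMetres(double lat1Deg, double lon1Deg,
                                 double lat2Deg, double lon2Deg) {
        final double R = 6371000.0; // mean Earth radius in metres
        double phi1 = Math.toRadians(lat1Deg);
        double phi2 = Math.toRadians(lat2Deg);
        double dPhi = Math.toRadians(lat2Deg - lat1Deg);
        double dLambda = Math.toRadians(lon2Deg - lon1Deg);

        double a = Math.sin(dPhi / 2) * Math.sin(dPhi / 2)
                 + Math.cos(phi1) * Math.cos(phi2)
                 * Math.sin(dLambda / 2) * Math.sin(dLambda / 2);
        double c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
        return R * c;
    }
}
```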
Moving to 3D models will probably be the most difficult part. It can be tricky to introduce camera frames into OpenGL on mobile devices. I don't have any experience on windows mobile or android, so I can't help there.
In any case have fun, it's really nice to see your virtual elements in the world for the first time!