I want to create a 3D view (360-degree view) of an object captured with a camera, the way apps like Fyuse or Phogy do it. I researched this but did not find anything useful to start with.
I have some questions like:
What tool should I use for this? E.g., do I need Unity, or is Android Studio enough?
Should I use an SDK (like Rajawali for 3D graphics) plus some other tool to accomplish this, or can it be implemented without any third-party SDK?
Can this be implemented by capturing a video of the object, extracting its frames, and then combining them to show a 360-degree view?
Can anyone please guide me on this? Any help is appreciated.
In fact, those apps are not really 3D.
You can get similar results by recording a video together with data from the motion/pose sensors, so that you can assign a phone pose to every frame.
Then you can control playback according to the actual phone rotation.
This project might help you: https://github.com/e-lab/VideoSensors
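For the sensor side, a minimal sketch in plain Android Java might look like the following (the class and its names are made up for illustration): it stores a timestamped azimuth for every rotation-vector reading while the video records, and during playback you find the stored sample closest to the current azimuth and seek the video to its timestamp.

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: logs (timestamp, azimuth) pairs while a video records.
public class PoseRecorder implements SensorEventListener {

    public static class Sample {
        public final long timestampNs;  // sensor timestamp, nanoseconds
        public final float azimuthRad;  // rotation around the vertical axis
        Sample(long t, float a) { timestampNs = t; azimuthRad = a; }
    }

    private final List<Sample> samples = new ArrayList<>();
    private final float[] rotationMatrix = new float[9];
    private final float[] orientation = new float[3];

    public void start(SensorManager sm) {
        Sensor rotation = sm.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
        sm.registerListener(this, rotation, SensorManager.SENSOR_DELAY_GAME);
    }

    public void stop(SensorManager sm) {
        sm.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Convert the rotation vector to an azimuth (yaw) angle.
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        SensorManager.getOrientation(rotationMatrix, orientation);
        samples.add(new Sample(event.timestamp, orientation[0]));
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }

    // During playback: pick the sample closest to the current azimuth
    // (ignoring angle wrap-around for brevity), then seek the video there.
    public Sample closestTo(float azimuthRad) {
        Sample best = null;
        float bestDiff = Float.MAX_VALUE;
        for (Sample s : samples) {
            float diff = Math.abs(s.azimuthRad - azimuthRad);
            if (diff < bestDiff) { bestDiff = diff; best = s; }
        }
        return best;
    }
}

Aligning sensor timestamps with the video's frame timestamps and smoothing the readings are the fiddly parts; this only covers the basic bookkeeping.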
I am fairly new to Android and especially to the various camera systems on this platform. I'm building an app where I need to integrate ARCore only to track the camera pose (among other things like objects in the scene, planes, etc.). I don't want to augment anything in the "real world", so I am not looking to preview the frames being fed to the camera. I've looked through all of the examples in the arcore-sdk and the sample code in Google's documentation. None of them cover my use case, where I want to fetch the camera's pose without previewing the camera images on a SurfaceView or similar. I also don't want to "fake" it by creating a view and hiding it. Does anyone have experience with this, or ideas on how to achieve it, or whether it can be achieved at all? Does ARCore even support this?
UPDATE: I found this https://github.com/google-ar/arcore-android-sdk/issues/259 where they mention that it's possible with just an OpenGL context. But I have no clue how to get started. Any samples or pointers would be appreciated!
You can run an ArSession for tracking; an ArSession doesn't depend on any View. You do still have to hand the session an OpenGL texture for the camera image, but that texture never needs to be displayed.
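As a rough sketch of that approach (assuming an offscreen EGL context is already current on the calling thread, and omitting the camera-permission and ARCore-availability checks a real app needs; the class itself is hypothetical):

import android.content.Context;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import com.google.ar.core.Camera;
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;
import com.google.ar.core.exceptions.CameraNotAvailableException;
import com.google.ar.core.exceptions.UnavailableException;

// Sketch: track the camera pose with ARCore without rendering any preview.
// Assumes an offscreen EGL context is current on this thread.
public class HeadlessTracker {
    private Session session;

    public void start(Context context)
            throws UnavailableException, CameraNotAvailableException {
        session = new Session(context);

        // ARCore still needs a texture to stream camera images into,
        // even if we never draw it anywhere.
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
        session.setCameraTextureName(tex[0]);

        session.resume();
    }

    // Call periodically (e.g. ~30 Hz) from the GL thread.
    public Pose pollPose() throws CameraNotAvailableException {
        Frame frame = session.update(); // waits for a new camera frame
        Camera camera = frame.getCamera();
        if (camera.getTrackingState() == TrackingState.TRACKING) {
            return camera.getPose();
        }
        return null; // not tracking yet (initializing or lost)
    }

    public void stop() {
        session.pause();
        session.close();
    }
}

The key point is that the texture ARCore streams the camera image into never has to be drawn to any view.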
I'm a beginner in Android.
I'm working on a project where I'm supposed to convert smartphone movement into mouse movement via the smartphone camera. The phone moves on a checkerboard surface and the movement information is sent to a computer over Bluetooth. Should I use image processing techniques to do this? Does anyone have relevant experience or similar code to help me out?
If I understand correctly, image processing would be a good way to detect movement on a 2D plane. The checkerboard pattern should make for relatively easy pixel-level comparison between frames.
You could implement this in a simple way using object detection.
But for your use case you will need to implement an optical flow analysis algorithm.
Optical mice internally use a similar technique called digital image correlation: they capture video frames continuously and compare consecutive frames to detect motion.
You should read about optical flow detection techniques on Wikipedia.
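To make digital image correlation concrete, here is a purely illustrative Java sketch (all names are hypothetical): it compares a patch at the center of the previous frame against shifted positions in the current frame using the sum of absolute differences, and reports the best-matching shift as the motion vector. A real implementation, or OpenCV's optical flow functions, would be far more robust.

// Illustrative digital image correlation: frames are grayscale images
// stored row-major as int[width * height], pixel values 0..255.
// The caller must ensure patch + maxShift fits inside the image bounds.
public final class MotionEstimator {

    public static final class Shift {
        public final int dx, dy;
        Shift(int dx, int dy) { this.dx = dx; this.dy = dy; }
    }

    public static Shift estimate(int[] prev, int[] curr,
                                 int width, int height,
                                 int patch, int maxShift) {
        int cx = width / 2, cy = height / 2;  // center of the patch
        long bestCost = Long.MAX_VALUE;
        int bestDx = 0, bestDy = 0;
        // Try every shift in [-maxShift, maxShift] on both axes.
        for (int dy = -maxShift; dy <= maxShift; dy++) {
            for (int dx = -maxShift; dx <= maxShift; dx++) {
                long cost = 0;
                for (int y = -patch; y <= patch; y++) {
                    for (int x = -patch; x <= patch; x++) {
                        int p = prev[(cy + y) * width + (cx + x)];
                        int c = curr[(cy + y + dy) * width + (cx + x + dx)];
                        cost += Math.abs(p - c);  // sum of absolute differences
                    }
                }
                if (cost < bestCost) {
                    bestCost = cost;
                    bestDx = dx;
                    bestDy = dy;
                }
            }
        }
        // The checkerboard gives strong texture, so the minimum is distinct.
        return new Shift(bestDx, bestDy);
    }
}

The (dx, dy) you get per frame pair is essentially the mouse delta you would send over Bluetooth.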
I want to develop an augmented reality application for Android that can use markers to generate 3D objects, and these 3D objects should be interactive via the phone's touch input.
I have browsed through the available SDKs like Vuforia, Junaio, or Layar Player and found that they all support:
Marker detection with 3D virtual image overlay
Virtual buttons that become active when you cover them (Vuforia)
Interactive video playback.
However, what I am looking for is:
A virtual object in AR that can be made interactive using the phone's touch input.
I am pretty sure it is possible, as there are virtual video overlays that start playing on click/tap (similar to an interactive virtual element).
Q. Could someone suggest a library/toolkit best suited for this functionality that I'm looking for?
or
Q. Is there something that I apparently missed during my search with the aforementioned toolkits that already support the functionality I want?
Based on your description, what you need is supported by Vuforia, and there is a sample for pure Android (no Unity) as well.
You need to look at the Dominoes sample, which shows how to drag an OpenGL domino object around the screen.
Look here for a quick description:
https://developer.vuforia.com/forum/faq/android-how-do-i-project-screen-touch-target
In case you run into problems while implementing it yourself, you can search the Vuforia forums for answers to common issues others have faced with this. But basically, it works well in their sample.
Well, this is for Unity 5.x.
First, go through Vuforia's documentation to learn about Image Targets and the AR Camera.
Import your 3D models into the scene so that all interactive objects are children of the image target.
Read the touch input on the mobile phone (I used Android for my project):
if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
Convert the touch point into a ray from the screen into the 3D world:
Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
Create a plane in the scene (for the ray to hit):
Plane plane = new Plane(Vector3.up, Vector3.zero);
If the ray hits the plane, get the hit point; pos will hold the world position:
float distance; // distance along the ray to the intersection
if (plane.Raycast(ray, out distance)) {
    Vector3 pos = ray.GetPoint(distance);
}
Please modify the code according to your needs; this is a very basic example.
Regards.
I just want to build an app similar to this, with my own content of course.
1. How do I capture 360-degree video (cameras, format, ingest, audio)?
2. Implementation:
2.1 Which Cardboard SDK works best for my needs (Android or Unity)?
2.2 Do you know any blogs, websites, tutorials, or samples I can lean on?
Thank you
MovieTextures are a great way to do this in Unity, but unfortunately MovieTextures are not implemented on Android (maybe this will change in Unity 5).
For a simple wrap-a-texture-on-a-sphere app, the Cardboard Java SDK should work. But if you would rather use Unity because of other aspects of the app, the next best way is to allocate a RenderTexture, grab its GL texture id, and pass it to a native plugin that you write.
This native code would decode the video stream and fill the texture with each frame. Unity can then handle the rest of the rendering, as detailed in the previous answer.
First of all, you need content, and to record stereo 360 video you'll need a rig of at least 12 cameras. Such rigs can be purchased for GoPro cams, so that's going to be expensive.
The recently released Unity 5 is a great option and I strongly suggest using it. The most basic way of doing 360 stereo video in Unity:
Create two spheres with MovieTextures showing your 360 video.
Turn them "inside out" so that they display their back faces instead of their front faces. This can be done with a simple shader that turns on front-face culling and undoes the resulting mirror effect.
Place your cameras inside the spheres. If you are using the Google Cardboard SDK camera rig, put the spheres on different culling layers and make each camera see only its appropriate sphere.
Remember to put the spheres in the proper positions relative to the cameras.
There may be other ways to do this, leading to better results, but they won't be as simple. You can also look for some paid scripts/plugins/assets to do 360 video in VR.
I want to use a QR code to get the smartphone's location (either UTM or lat/lon). Reading this article, it looks like it is possible to get the position of the smartphone. In addition, I want to render some 3D models on the camera screen. Is that possible? I actually have no clue where to start.
Can anyone help me out regarding this?
Thanks.
If you read that article carefully, all it suggests for getting the phone's location is to encode the lat/lon in the QR code itself. This will only work if the locations of the displayed QR codes are fixed (e.g. a sticker on a wall rather than printed on a flyer).
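As a trivial illustration of that scheme (the "lat,lon" payload format is an assumption, and the actual QR decoding would be done with a library such as ZXing): the code carries the coordinates as plain text, and the app parses them after scanning.

// Hypothetical sketch: the QR payload is assumed to be "lat,lon",
// e.g. "52.5200,13.4050". Decoding the barcode itself is done elsewhere
// (e.g. with ZXing); here we only parse the decoded text.
public final class QrLocation {
    public final double latitude;
    public final double longitude;

    private QrLocation(double lat, double lon) {
        this.latitude = lat;
        this.longitude = lon;
    }

    public static QrLocation parse(String qrPayload) {
        String[] parts = qrPayload.trim().split(",");
        if (parts.length != 2) {
            throw new IllegalArgumentException(
                "Expected \"lat,lon\", got: " + qrPayload);
        }
        return new QrLocation(Double.parseDouble(parts[0]),
                              Double.parseDouble(parts[1]));
    }
}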
Is it possible to render 3D models on a camera screen? Sure, but it wouldn't be the default camera app; you'd have to make your own. It would also involve a fair bit of math if you wanted to position the 3D model relative to the QR code; you'd probably try to build planes based on the sides of the code's squares.