I have a client who works on styling cars. He needs an app that lets the user take several pictures of their car and render a single 3D image he can use to look around the car. Is there any way to do this? I have been searching for methods but can't find a solution.
I have no experience in augmented reality or image processing. I know there are lots of documents on the internet, but to look in the right places I should know the basics first. I'm planning to code an Android app that will use augmented reality for a virtual fitting room, and I have determined some functionalities of the app. My question is: how could I implement those functionalities, which topics should I look into, where should I start, which key functionalities should the app achieve, and which open-source SDK would you suggest? Then I can do deeper research.
-- Virtualizing clothes, which will be provided by me, and making them usable in the app
-- Which attributes virtualized clothes should have and how to store them
-- Scanning real-life clothes, virtualizing them, and making them usable in the app
-- Tracking the human who will try on those clothes
-- Human body sizes vary, so the clothes should be resized to fit each person
-- Clothes should look as realistic as possible
-- Whenever a person moves, the clothes should move with them (if the person bends, the clothes also bend and stay fitted on that person), and this should be as fast as possible.
Have you tried Snapchat's face filters?
It's essentially the same problem. They need to:
Create a model of a face (where are the eyes, nose, mouth, chin, etc)
Create a texture to map onto the model of the face
Extract faces from an image/video and map the 2D coordinates from the image to the model of the face you've defined
Draw/Render the texture on top of the image/video feed
Now you'd have to do the same, but instead you'd do it for a human body.
The main issue you'd have to deal with is that only "half" of the body is visible to the camera at any time (because the other half is facing away from the camera). Also, your textures would have to map onto a full 3D model, versus the relatively 2D model of a face (facial features lie mostly on a flat plane, which is a good enough approximation).
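To make the mapping step concrete: with three corresponding landmarks you can solve for the 2D affine transform that carries camera-frame coordinates into texture coordinates. A minimal NumPy sketch, where the landmark positions are invented purely for illustration:

```python
import numpy as np

# Hypothetical landmark positions: the same three facial features
# (left eye, right eye, nose tip) located in a camera frame and in
# a pre-made filter texture. Units are pixels.
frame_pts = np.array([[120.0, 80.0], [200.0, 85.0], [160.0, 150.0]])
texture_pts = np.array([[50.0, 60.0], [206.0, 60.0], [128.0, 170.0]])

def affine_from_points(src, dst):
    """Solve for the 2x3 affine matrix A such that A @ [x, y, 1] = dst."""
    n = len(src)
    # Design matrix [x, y, 1] for each source point.
    M = np.hstack([src, np.ones((n, 1))])
    # Least-squares fit one row of A per output coordinate
    # (exact for three non-collinear points).
    X, *_ = np.linalg.lstsq(M, dst, rcond=None)
    return X.T  # shape (2, 3)

A = affine_from_points(frame_pts, texture_pts)

def to_texture(pt):
    """Map a camera-frame pixel into texture space."""
    return A @ np.array([pt[0], pt[1], 1.0])
```

In a real pipeline you would do this per frame with the landmarks your tracker reports (and usually per triangle of a landmark mesh, not globally), then sample the texture at the mapped coordinates when rendering.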
Good luck!
I have searched a lot about developing a 360° camera app like Google Street View, but I still haven't been able to find a solution.
I tried panoramagl-android, but it is not what I am looking for.
So can anyone please give me an idea or suggest anything to create a spherical camera application?
360 images and videos are generally created with dedicated cameras, or with groups of regular cameras whose output is then 'stitched' together to produce the 360 representation.
The usual way to represent a 360 image or video at this time is an equirectangular projection, similar to the technique used to depict the spherical globe on flat maps of the world.
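The equirectangular mapping mentioned above can be sketched in a few lines. Note the axis convention here (+z forward, +y up, longitude spanning the full image width) is an assumption; different tools differ:

```python
import math

def dir_to_equirect(x, y, z, width, height):
    """Map a 3D view direction to (u, v) pixel coordinates in an
    equirectangular image. Assumes +z is forward and +y is up; the
    image width covers 360 degrees of longitude and the height
    covers 180 degrees of latitude."""
    lon = math.atan2(x, z)                                # -pi .. pi
    lat = math.asin(y / math.sqrt(x*x + y*y + z*z))       # -pi/2 .. pi/2
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v
```

For example, the straight-ahead direction (0, 0, 1) lands at the exact center of the image, and directions to the left and right spread across the horizontal axis.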
If you are trying to do this with a regular phone, you face the issue that you only have one camera, so you won't get images from multiple viewpoints at the same time to stitch together. This is maybe easier to understand visually: picture a rig of several cameras pointing outward, each covering part of the sphere.
You then need software to 'stitch' the different videos together. There are quite a few options, many being proprietary, VideoStitch is probably the best known at this time: http://www.video-stitch.com/.
Note that this is processing intensive, so it is nearly always done on relatively high-powered servers rather than on mobile devices.
I need to scan a specific object within my Android application.
I thought about using OpenCV, but it scans all objects inside the view of the camera. I only need the camera to recognize a rectangular piece of paper.
How can I do that?
My first thought was: how do barcode scanners work? They are able to recognize the barcode area and automatically take a picture when the barcode is inside a predefined area of the screen and when it's sharp. I guess it must be possible to transfer that to my problem (tell me if I'm wrong).
So step by step:
Open custom camera application
Scan objects inside the view of the camera
Recognize the rectangular piece of paper
If paper is inside a predefined area and sharp -> take a picture
I would combine this with audio: if the camera recognizes the paper, play some noise like a beep, and the closer the object is to fitting the predefined area, the faster the beep is repeated. That would make taking pictures possible for blind people.
Hope someone got ideas on that.
OpenCV is an image processing framework/library. It does not "scan all objects inside the view of the camera". By itself it does nothing, but it provides a number of useful functions, many of which could be used for your specified application.
If the image is not cluttered and nothing is on the paper, I would look into using edge detection (e.g. Canny or similar) or even colour blobs (even though colour is never a good idea, if your application always targets white uncovered paper, it should work robustly).
OpenCV does add some overhead, but it would allow you to quickly use functions for a simple solution.
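To sketch the contour idea: OpenCV's approxPolyDP uses Ramer-Douglas-Peucker simplification, and a contour that simplifies to four corners is a good candidate for a sheet of paper; a variance-of-Laplacian score is a common sharpness check for the auto-capture step. A rough pure-Python/NumPy illustration of both ideas, not OpenCV's actual API:

```python
import math
import numpy as np

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker polyline simplification - the same idea
    behind OpenCV's approxPolyDP. A closed contour that simplifies to
    four corners is a plausible rectangular sheet of paper."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]

    def dist(p):
        # Perpendicular distance of p from the chord (x1,y1)-(x2,y2).
        px, py = p
        num = abs((y2 - y1) * px - (x2 - x1) * py + x2 * y1 - y2 * x1)
        den = math.hypot(y2 - y1, x2 - x1)
        return num / den if den else math.hypot(px - x1, py - y1)

    idx, dmax = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
                    key=lambda t: t[1])
    if dmax > epsilon:
        # Keep the farthest point and recurse on both halves.
        left = rdp(points[:idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

def sharpness(gray):
    """Variance of a simple 4-neighbour Laplacian response over a
    grayscale image: higher means sharper. A threshold on this value
    can decide when to auto-capture."""
    lap = (-4 * gray[1:-1, 1:-1] + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())
```

In practice you would run Canny, extract contours, simplify each with approxPolyDP, and accept a convex 4-point contour of sufficient area; the sharpness score then gates the actual capture (and could drive the beep rate you describe).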
I am working on a text RPG game for Android, and would like to create some kind of an area map for the player's reference. I Googled it but was directed to Tiled. I am not trying to create a playable level map, just a basic map that the player can open to see areas explored so far for easier navigation. I would probably create these graphics myself using Fireworks.
The problem is a phone screen is only so big, and I need to somehow allow the player to swipe left/right/up/down to see all parts of the map. I guess on the biggest tablets perhaps the whole map would fit, but I need to support smaller resolutions as well.
So basically I think I need a way to display the map image in an activity (which I know how to do, duh), and allow the user to scroll around it because it will be way bigger than the screen.
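Whatever Android view you use to render the map, the core bookkeeping of scrolling an oversized image is just clamping the pan offset to the map bounds so the viewport never runs past an edge. A minimal language-agnostic sketch (function and parameter names are my own):

```python
def clamp_scroll(offset_x, offset_y, map_w, map_h, view_w, view_h):
    """Clamp a pan offset (in pixels) so the viewport stays inside
    the map - the bookkeeping a custom view would do on each scroll
    gesture before redrawing."""
    max_x = max(0, map_w - view_w)   # 0 when the map fits horizontally
    max_y = max(0, map_h - view_h)   # 0 when the map fits vertically
    return (min(max(offset_x, 0), max_x),
            min(max(offset_y, 0), max_y))
```

On Android itself, detecting the swipe (e.g. with a gesture listener) and drawing the visible portion of the bitmap at the clamped offset is all a basic pannable map needs; existing pan/zoom image-view libraries also handle this for you.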
I just need some guidance on how to detect a marker and produce an output text. For example: for a marker with an image of a dog, when it is detected, I get the output text "DOG" in a text field. Can someone help me with my idea? Oh, by the way, which one is more effective to use for my idea, NyARToolkit or AndAR? Thanks :) Need help!
What you're looking for isn't augmented reality, it's object recognition. AR is chiefly concerned with presenting data overlaid on the real world, so computation is devoted each frame to determining the position of the object relative to the camera. If you don't intend to use this data, AR libraries may be inefficient. That said...
AR marker tracking libraries usually find markers by prominent features like corners, and can distinguish markers by binary patterns encoded inside the marker, or in the marker's borders. If you're happy with having the "dog" part encoded in the border of a marker, there are libraries you can use like Qualcomm's AR development kit. This library, and Metaio's Unifeye mobile, can also do natural feature tracking on pre-defined images. If you're happy with being able to recognize one specific image or images of dogs that you have defined in advance, either of these should be ok. You might have to manipulate your dog images to get good features they can identify and track. Natural objects can be problematic.
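To make the binary-pattern idea concrete, here is a minimal sketch of marker decoding, assuming the marker has already been found and its interior sampled into a 4x4 bit grid. The patterns and the MARKER_DB lookup table are invented for illustration; real toolkits use their own encodings:

```python
import numpy as np

# Hypothetical 4x4 bit patterns for two markers. In a real system
# these bits would come from thresholding and sampling the cells
# inside the detected marker border.
MARKER_DB = {
    (1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1): "DOG",
    (0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0): "CAT",
}

def decode_marker(cells):
    """Look the sampled bit grid up in the database, trying all four
    rotations since the marker may be seen at any orientation.
    Returns the label string, or None if the grid is unknown."""
    grid = np.asarray(cells).reshape(4, 4)
    for _ in range(4):
        label = MARKER_DB.get(tuple(int(b) for b in grid.flatten()))
        if label is not None:
            return label
        grid = np.rot90(grid)
    return None
```

Once decode_marker returns "DOG", putting that string into a text field is ordinary UI code; the AR library's only job here was finding and sampling the marker.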
General object recognition (being able to recognize a picture of any dog, not known beforehand) is still a research topic. There are approaches, but they're mostly very computationally intensive, and most mobile solutions involve offloading the serious computation to a server. Recognition of simple outline sketches however is more tractable, there's a great paper called "Shape recognition and pose estimation for mobile augmented reality" (I can't find a copy online, but the IEEE link is here) that uses contours to identify objects - this is light enough to run on a mobile (and it's pure genius).