I just need some guidance on how to detect a marker and produce an output text. For example: a marker with an image of a dog; when it is detected, I want the output text "DOG" shown in a text field. Can someone help me with my idea? Also, which one is more effective to use for this, NyARToolkit or AndAR? Thanks :)
What you're looking for isn't augmented reality, it's object recognition. AR is chiefly concerned with presenting data overlaid on the real world, so computation is devoted each frame to determining the position of the object relative to the camera. If you don't intend to use this data, AR libraries may be an inefficient choice. That said...
AR marker tracking libraries usually find markers by prominent features like corners, and can distinguish markers by binary patterns encoded inside the marker, or in the marker's border. If you're happy with having the "dog" part encoded in the border of a marker, there are libraries you can use like Qualcomm's AR development kit. This library, and Metaio's Unifeye Mobile, can also do natural feature tracking on pre-defined images. If you're happy with being able to recognize one specific image, or images of dogs that you have defined in advance, either of these should be fine. You might have to manipulate your dog images to get good features they can identify and track; natural objects can be problematic.
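Whichever library you settle on, the application-side pattern is usually the same: the tracker reports a marker ID (or target name), and you map that to the text you want to show. A minimal, library-agnostic sketch in Java; the onMarkerDetected callback and the ID-to-label table are placeholders for whatever your chosen SDK actually provides:

```java
import android.app.Activity;
import android.widget.TextView;
import java.util.HashMap;
import java.util.Map;

public class MarkerLabeler {
    // Hypothetical mapping from marker IDs (as reported by your tracking
    // library) to the text you want to show in the UI.
    private static final Map<Integer, String> LABELS = new HashMap<>();
    static {
        LABELS.put(0, "DOG");
        LABELS.put(1, "CAT");
    }

    private final Activity activity;
    private final TextView outputField;

    public MarkerLabeler(Activity activity, TextView outputField) {
        this.activity = activity;
        this.outputField = outputField;
    }

    // Call this from whatever detection callback your SDK provides.
    public void onMarkerDetected(int markerId) {
        final String label = LABELS.get(markerId);
        if (label == null) return; // unknown marker, ignore
        // Tracking callbacks usually arrive off the UI thread.
        activity.runOnUiThread(() -> outputField.setText(label));
    }
}
```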
General object recognition (being able to recognize a picture of any dog, not known beforehand) is still a research topic. There are approaches, but they're mostly very computationally intensive, and most mobile solutions involve offloading the serious computation to a server. Recognition of simple outline sketches, however, is more tractable; there's a great paper called "Shape recognition and pose estimation for mobile augmented reality" (I can't find a copy online, but the IEEE link is here) that uses contours to identify objects - this is light enough to run on a mobile (and it's pure genius).
Before I make a giant word dump, my effective question is this:
Can I supply some extra information or heuristics to ARCore to refine its idea of what the pose of a detected augmented image is? Or can I use the pose of other trackable objects to refine the pose of a detected augmented image?
For more info, here is some background information on my workflow:
My AR app revolves around overlaying various 3D CAD models on top of their real-world machine equivalents. The user interaction goes like this:
The user will adhere a QR code (sized .2 meters by .2 meters) to a predetermined location on the associated machine (location is specific to the type of machine).
The user will then load up the app and point the camera at the QR code; the app will pass the camera image to a QR code reading library and use the payload (an ID for a specific machine) to retrieve the associated CAD model and metadata.
Once the QR code is detected I can use the QR code reading library to construct a pristine image of the QR code and pass this image to ARCore so that it can detect it in 3D space from the camera.
Once the QR code is detected in 3D space, I attach an anchor and I use the knowledge of where the QR code should be placed on the given model (also retrieved from my database using the payload info) to determine a basis for my CAD model.
Information can be overlaid using the CAD model to show various operations and interactions.
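For context, the QR-registration and anchoring parts map onto ARCore's Augmented Images API roughly like this (a simplified sketch of what I'm doing; machineId and the surrounding plumbing are placeholders):

```java
import android.graphics.Bitmap;
import com.google.ar.core.Anchor;
import com.google.ar.core.AugmentedImage;
import com.google.ar.core.AugmentedImageDatabase;
import com.google.ar.core.Config;
import com.google.ar.core.Frame;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;

public class QrImageTracking {

    // Register the pristine QR bitmap so ARCore can track it.
    public static void registerQrImage(Session session, Bitmap qrBitmap, String machineId) {
        AugmentedImageDatabase db = new AugmentedImageDatabase(session);
        // Passing the known physical width (0.2 m) helps ARCore estimate
        // the pose without waiting to resolve scale from parallax.
        db.addImage(machineId, qrBitmap, 0.2f);

        Config config = session.getConfig();
        config.setAugmentedImageDatabase(db);
        session.configure(config);
    }

    // Once the image is tracked, attach an anchor at its center pose.
    public static Anchor anchorOnQr(Frame frame, String machineId) {
        for (AugmentedImage image : frame.getUpdatedTrackables(AugmentedImage.class)) {
            if (image.getTrackingState() == TrackingState.TRACKING
                    && machineId.equals(image.getName())) {
                return image.createAnchor(image.getCenterPose());
            }
        }
        return null; // not tracked this frame
    }
}
```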
Now I've got all this working pretty well, but I've run into an issue where the model is never positioned exactly on its real-world equivalent and requires some manual positional adjustment after the fact to get things just right. I have some ideas for how to resolve this, but I don't know how feasible any of them are:
ArTrackable_acquireNewAnchor allows you to specify multiple anchors per trackable with different poses. I assume this will refine the tracking of the object, but I'm not clear on how to use this API. I'm currently just passing the pose generated by ArAugmentedImage_getCenterPose, so I don't know what other poses I would pass.
If I promote my QR code anchor to a cloud anchor after detection, will that aid in detecting or refining the QR pose in the future?
If I try to match other features detected by ARCore (like planes) to known topology in the real environment (like floors and walls), could I better approximate the position of the QR code image, or provide some heuristic to ARCore so that it can?
Instead of using a single QR code image, what if I use a set of images (one QR code and two other static images) that are slightly offset from each other? If we know how far apart these images are in the real world, we can use this information to correct for the error in ARCore's estimate of where they are; a rough sketch of what I mean is below.
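To make that last idea concrete, here's the kind of consistency check I have in mind, assuming both images are currently tracked (this is my own heuristic, not a built-in ARCore refinement):

```java
import com.google.ar.core.AugmentedImage;
import com.google.ar.core.Pose;

public class PoseConsistency {

    // Returns the ratio between the separation ARCore currently reports for
    // two tracked images and their known physical separation (in meters).
    // A value far from 1.0 suggests the image poses are off and the CAD
    // model placement should be corrected (or the detection re-run).
    public static double separationError(AugmentedImage a, AugmentedImage b,
                                         double knownSeparationMeters) {
        Pose pa = a.getCenterPose();
        Pose pb = b.getCenterPose();
        double dx = pa.tx() - pb.tx();
        double dy = pa.ty() - pb.ty();
        double dz = pa.tz() - pb.tz();
        double measured = Math.sqrt(dx * dx + dy * dy + dz * dz);
        return measured / knownSeparationMeters;
    }
}
```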
Sorry for the giant word dump, but I figured the more info the better. Any other ideas outside the framing of my question are also appreciated.
I have no experience in augmented reality or image processing, and I know there are lots of documents on the internet, but to look in the right places I should know the basics first. I'm planning to code an Android app which will use augmented reality for a virtual fitting room, and I have determined some functionalities of the app. My question is how I could implement those functionalities, which topics I should look into, where to start, which key functionalities the app should achieve, and which open-source SDK you would suggest, so I can do deeper research:
-- Virtualizing clothes which will be provided by me and making them usable for the app
-- Which attributes virtualized clothes should have and how to store them
-- Scanning real-life clothes, virtualizing them and making them usable for the app
-- Tracking the person who will try on those clothes
-- Human body sizes vary, so clothes should be resized to fit each person
-- Clothes should look as realistic as possible
-- Whenever a person moves, the clothes should move with them (if the person bends, the clothes also bend and stay fitted to them), and it should be as quick as possible.
Have you tried Snapchat's face filters?
It's essentially the same problem. They need to:
Create a model of a face (where are the eyes, nose, mouth, chin, etc)
Create a texture to map onto the model of the face
Extract faces from an image/video and map the 2D coordinates from the image to the model of the face you've defined
Draw/Render the texture on top of the image/video feed
Now you'd have to do the same, but instead you'd do it for a human body.
The issue you'd have to deal with is the fact that only "half" of your body is visible to the camera at any time (because the other half is facing away from it). Also, your textures would have to map onto a 3D model, versus the relatively 2D model of a face (facial features are mostly on a flat plane, which is a good enough approximation).
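As a very rough illustration of steps 3 and 4 above: once some detector has given you a few 2D landmark points on the body (the landmark source is assumed here, not tied to any particular SDK), you can warp a clothing texture into the camera frame with OpenCV, for example via an affine transform between corresponding points:

```java
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class ClothingOverlay {

    // frame:   current camera image (BGR)
    // texture: clothing image (BGR)
    // srcPts / dstPts: three corresponding points (e.g. left shoulder, right
    //                  shoulder, waist) in the texture and in the frame.
    public static void overlay(Mat frame, Mat texture, Point[] srcPts, Point[] dstPts) {
        MatOfPoint2f src = new MatOfPoint2f(srcPts);
        MatOfPoint2f dst = new MatOfPoint2f(dstPts);

        // Affine transform mapping the texture's reference points onto the
        // detected body landmarks.
        Mat warp = Imgproc.getAffineTransform(src, dst);

        // Warp the texture into frame coordinates.
        Mat warpedTexture = new Mat();
        Imgproc.warpAffine(texture, warpedTexture, warp, frame.size());

        // Warp an all-white mask the same way so only clothing pixels
        // overwrite the frame.
        Mat mask = new Mat(texture.size(), CvType.CV_8UC1, new Scalar(255));
        Mat warpedMask = new Mat();
        Imgproc.warpAffine(mask, warpedMask, warp, frame.size());

        warpedTexture.copyTo(frame, warpedMask);
    }
}
```

A real fitting room would need a full 3D body model and cloth simulation, but this is the basic "map the coordinates, then render" loop.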
Good luck!
It seems I've found myself in the deep weeds of the Google Vision API for barcode scanning. Perhaps my mind is a bit fried after looking at all sorts of alternative libraries (ZBar, ZXing, and even some for-cost third party implementations), but I'm having some difficulty finding any information on where I can implement some sort of scan region limiting.
The use case is a pretty simple one: if I'm a user pointing my phone at a box with multiple barcodes of the same type (think shipping labels here), I want to explicitly point some little viewfinder or alignment straight-edge on the screen at exactly the thing I'm trying to capture, without having to worry about anything outside that area of interest giving me some scan results I don't want.
The above case is handled in most other Android libraries I've seen, taking in either a Rect with relative or absolute coordinates, and this is also a part of iOS' AVCapture metadata results system (it uses a relative CGRect, but really the same concept).
I've dug pretty deep into the sample app for the barcode-reader here, but the implementation is a tad opaque beyond the high-level details.
It seems like an ugly patch to detect barcodes anywhere within the camera's preview frame and then simply no-op on those outside the area of interest, since the device is still working hard to process the full frames.
Am I missing something very simple and obvious on this one? Any ideas on a way to implement this cleanly, otherwise?
Many thanks for your time in reading through this!
The API currently does not have an option to limit the detection area. But you could crop the preview image before it gets passed into the barcode detector. See here for an outline of how to wrap a detector with your own class:
Mobile Vision API - concatenate new detector object to continue frame processing
You'd implement the "detect" method to take the frame received from the camera, create a cropped version of the frame, and pass that through to the underlying detector.
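Here's a minimal sketch of such a wrapper, assuming for simplicity that the frames you feed it carry a Bitmap (frames coming straight from the camera arrive as byte buffers, so you may need to convert or crop those directly); the crop rectangle is in frame pixel coordinates:

```java
import android.graphics.Bitmap;
import android.util.SparseArray;
import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.barcode.Barcode;

public class RegionLimitedBarcodeDetector extends Detector<Barcode> {
    private final Detector<Barcode> delegate;
    private final int left, top, width, height; // region of interest, in frame pixels

    public RegionLimitedBarcodeDetector(Detector<Barcode> delegate,
                                        int left, int top, int width, int height) {
        this.delegate = delegate;
        this.left = left;
        this.top = top;
        this.width = width;
        this.height = height;
    }

    @Override
    public SparseArray<Barcode> detect(Frame frame) {
        Bitmap source = frame.getBitmap();
        if (source == null) {
            // Camera frames are delivered as ByteBuffers; convert or crop the
            // buffer here instead. Falling back to the full frame for brevity.
            return delegate.detect(frame);
        }
        // Crop to the area of interest and hand only that to the real detector.
        Bitmap cropped = Bitmap.createBitmap(source, left, top, width, height);
        Frame croppedFrame = new Frame.Builder().setBitmap(cropped).build();
        return delegate.detect(croppedFrame);
    }

    @Override
    public boolean isOperational() {
        return delegate.isOperational();
    }
}
```

Note that the returned barcodes' bounding boxes will be in the cropped image's coordinate space, so offset them by (left, top) if you need positions relative to the original frame.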
I'm new to augmented reality, but what is meant by the term "marker"? I did a web search and it says the marker is a place where content will be shown on the mobile device, but I'm still not clear. Here is what I found out so far:
Augmented reality is hidden content, most commonly hidden behind marker images, that can be included in printed and film media, as long as the marker is displayed for a suitable length of time, in a steady position, for an application to identify and analyze it. Depending on the content, the marker may have to remain visible.
There are a couple of types of marker in Vuforia: ones you define yourself after uploading them to their CMS online, ones that you can create at run time, and markers that just have information encoded around the edge. They are where your content will appear. You can see a video here where my business card is the marker and the 3D content is rendered on top. http://youtu.be/MvlHXKOonjI
When the app sees the marker it will work out the pose (position and rotation) of the marker and apply that to any 3D content you want to load; that way, as you move around the marker, the content stays in the same position relative to the marker.
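In rendering terms, "applying the pose" just means using the marker's pose matrix as part of your model matrix each frame. A rough sketch with Android's matrix helpers, assuming the SDK hands you the marker pose and you already have view and projection matrices (all column-major float[16] arrays):

```java
import android.opengl.Matrix;

public class MarkerRendering {

    // markerPose:     4x4 pose of the marker reported by the tracking SDK
    // localTransform: the model's own offset/scale relative to the marker
    // view, projection: current camera matrices
    public static float[] buildMvp(float[] markerPose, float[] localTransform,
                                   float[] view, float[] projection) {
        float[] model = new float[16];
        float[] modelView = new float[16];
        float[] mvp = new float[16];

        // model = markerPose * localTransform: the content stays glued to the marker.
        Matrix.multiplyMM(model, 0, markerPose, 0, localTransform, 0);
        Matrix.multiplyMM(modelView, 0, view, 0, model, 0);
        Matrix.multiplyMM(mvp, 0, projection, 0, modelView, 0);
        return mvp;
    }
}
```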
And one final heads up, this is much easier in Unity 3D than using the iOS or Android native versions. I've done quite a lot and it saves a lot of time.
A marker is nothing but a target for your augmented reality app. Whenever you see your marker through the AR camera, the models of your augmented reality app will be shown on top of it!
It will be easy to understand if you develop your first app in augmented reality! :)
TLDR: Augmented Reality markers ~= Google Goggles + Activator
AR markers are real-world objects or locations that, when identified in view, trigger associated actions in your AR system, such as displaying annotations (à la Rap Genius for the objects around you).
Example:
Imagine you are gazing through your AR glasses. (You will see both what is displayed on the lenses, if anything, as well as the world around you.)
As you drive by a series of road cones closing the rightmost lane ahead, the AR software analyzes the scene and identifies several road cones in formation. This pattern is programmed to launch traffic-notification software which, in conjunction with your AR device's built-in GPS, obtains the latest information about what is going on here and what to expect.
In this way, the road cone formation is a marker: something particular and pre-defined that triggers some action, in this case obtaining and presenting information about your surroundings.
I'm working on a project to recognize insects from user inputted images. I think that OpenCV is the route I'd like to take since I've worked with it before for facial recognition. I'm not using the camera feed and am instead using images provided by the user. For early development I plan to build in some sample images to ensure the concept is working before moving on to other features.
I would like to use 4-5 template images for each insect and have that be robust enough to detect the insect from the input image. If there are multiple insects I would like for them all to be detected and have their own rectangle drawn around them.
With that brief explanation, I am wondering what the best way to complete this task is. I know that OpenCV has template matching, but the template size matters and I don't want to make the user ensure their insect occupies a certain number of pixels in their image. Is there a way to work around this, possibly by rotating the template images or using variously sized templates? Or is there a better approach than template matching for this project?
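To show what I mean about the size problem, this is roughly the multi-scale template matching I was considering (resizing the template over a range of scales and keeping the best normalized score); I'm not sure it's the right approach, which is partly why I'm asking:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class MultiScaleTemplateMatch {

    // Slide a resized copy of the template over the image at several scales and
    // return the best-scoring location, or null if nothing clears the threshold.
    public static Rect findBestMatch(Mat image, Mat template, double threshold) {
        Rect best = null;
        double bestScore = threshold;

        for (double scale = 0.5; scale <= 2.0; scale += 0.1) {
            Mat scaled = new Mat();
            Imgproc.resize(template, scaled,
                    new Size(template.cols() * scale, template.rows() * scale));
            if (scaled.cols() > image.cols() || scaled.rows() > image.rows()) continue;

            Mat result = new Mat();
            Imgproc.matchTemplate(image, scaled, result, Imgproc.TM_CCOEFF_NORMED);
            Core.MinMaxLocResult mm = Core.minMaxLoc(result);

            if (mm.maxVal > bestScore) {
                bestScore = mm.maxVal;
                Point p = mm.maxLoc;
                best = new Rect((int) p.x, (int) p.y, scaled.cols(), scaled.rows());
            }
        }
        return best;
    }
}
```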
Unfortunately, without some form of constraints you are essentially asking whether computer vision has been solved! You have several unresolved, but very interesting, research problems.
Let's reduce the problem to just classifying a sample insect in a fixed pose, with controlled lighting, as belonging to one of 100k insect categories; that would be tough.
Let's reduce the problem to recognizing a single insect instance in an arbitrary pose in 3D space; that would be tough.
Let's reduce the problem to recognizing a single insect instance in the same pose, under arbitrary lighting conditions, viewed with arbitrary optical sensors; that would be tough.
Successful computer vision in the wild is all about cleverly constraining the operating conditions; otherwise you are in research land. If you are in research land, then a cool thing to do is to try and exploit 3D CAD models to capture the huge variety in poses. Here's a nice one on recognizing chairs:
http://www.di.ens.fr/willow/research/seeing3Dchairs/,
If you're not conducting research and are, say, building an app, then you need to consider how you can guide the user, train the user, or trick the user into providing the best operating conditions for the recognition system.
(This was too big to put in comments.)