I would like to make an app with a 360-degree product viewer in it, but I would also like the user to interact with some options alongside it. How can I achieve this? Any expertise on it? Thanks.
You can do it this way. Example:
http://jbk404.site50.net/360DegreeView/mobile/common.html
Just copy and paste the source code of the page and the car image sprite, then modify the variables according to your image. After that, implement it in Android using a WebView, loading the page from the assets folder so it is local and you will not need an internet connection.
So if you want a 3D product viewer, you would need to get hold of 3D models, and then you would need to get to know OpenGL a little better.
Do you have models? If not, I'd suggest you start by getting those.
I've tried several ways to create an Android app that classifies (with TensorFlow) whether the image in the camera view IS a document of my type OR NOT.
I've decided to create my custom model for this classification task.
First I tried the CustomVision helper: I created a 'model.pb' file and trained it on 100 correct images of my document and 50 images of the document with mistakes (I know that's a very small number of images, but that's all I have at the moment). As output I have 'model.pb' and 'labels' ('correct' and 'invalid') files. I put them into the Android example (custom vision sample) and it works very sadly: the app always says that everything I see on the camera screen (people, desks, windows, nature...) is my CORRECT document label. Only sometimes, if I catch a document with wrong stamps on the camera screen, do I get the INVALID label.
So I've decided to use a more complex model and simply re-train it.
I've used this tutorial to get a model and train it (TensorFlow for Poets codelab). But the situation is the same: everything in the camera view is detected as 'correct', and sometimes (when I point the camera at the document at a wrong angle, or not at the full document) as 'invalid'.
SO MY QUESTION IS:
What am I doing wrong conceptually? Maybe I'm training the models in the wrong way? Or maybe TensorFlow models can't be used for the goal of detecting documents on screen?
I am developing an app that captures a business card using a custom Android camera, and then I need to auto-crop the unwanted space and store the image. I am using OpenCV for this. All the examples I am seeing are in Python; I need it in Android native.
You can probably try something like this:
1) Get an edge map of the image (perform edge detection)
2) Find contours on the edge map. The outermost contour should correspond to the boundary of your business card (under the assumption that the card is photographed against a solid background). This will help you extract the business card from the image.
3) Once extracted you can store the image separately without the unwanted space.
OpenCV will help you with points 1, 2 and 3. Use something like Canny edge detection for point 1. The findContours function will come in handy for point 2. Point 3 is basic image manipulation, which I guess you don't need help with.
This might not be the most precise answer out there - but neither is the question very precise - so I guess it is alright.
I want to make an Android app with a custom camera API which can take pictures with some PNG files as frames (like some webcam apps on PCs). Also, first I want to take a picture of a ball (or something) which will act as a frame for the second photo that I am going to take. Anybody have an idea?
Most devices already have a camera application, which you can launch for a result if that suits your requirement.
But if you have more extensive requirements, Android also allows you to control the camera directly. Directly controlling the camera is much more involved, and you should assess your requirements before deciding on either approach.
You can refer to the following developer guides for details of both:
http://developer.android.com/training/camera/photobasics.html
http://developer.android.com/training/camera/cameradirect.html
Once you get the Bitmap, you can use a Canvas to combine the two bitmaps.
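On Android that combining step is two Canvas.drawBitmap calls on a Canvas backed by a mutable Bitmap: first the photo, then the PNG frame on top. The compositing itself is just an alpha-over operation; here is a sketch of the same idea with Pillow in Python (`overlay_frame` is a hypothetical helper name of mine):

```python
from PIL import Image

def overlay_frame(photo, frame):
    """Alpha-composite a (partially transparent) PNG frame over a photo."""
    base = photo.convert("RGBA")
    overlay = frame.convert("RGBA")
    # Scale the frame to the photo, as you would on the Android Canvas
    if overlay.size != base.size:
        overlay = overlay.resize(base.size)
    return Image.alpha_composite(base, overlay)
```

Wherever the frame PNG is transparent, the photo shows through; wherever it is opaque, the frame wins - the same result the two drawBitmap calls give you.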
I have gone through all the samples of Wikitude. Is it possible to overlay the live camera feed with an image that has been saved as a screenshot, and create an augmented image? If it is possible, what tracker image should I use? A tracker image is one where I know in advance which image I am going to track; if the image will only be taken in the future, how can I create a .wtc file for it, and how can I augment my camera feed? Is this possible in Wikitude?
I have to create an application using Wikitude. I like the Wikitude SDK.
If I understand you correctly, you are looking for a way to create target images (which are used for recognition) on the device. This is currently not supported. However, if you have a valid business case, we can provide you with a server-based tool to create target images. For more information please contact sales#wikitude.com.
Disclaimer: As you probably already guessed, I'm working for Wikitude.
I'm using code that takes camera data about an AR marker and calculates the Android device's position with AndAR (based on ARToolkit).
The point is that I want to run that analysis while rendering another 3D object, which has nothing in common with the camera or the AR tag.
Does someone have an idea? Thanks!
Answering my own question: it's not possible to analyze the images without outputting them, because the main activity used by ARToolkit inherits from AndARActivity, which inherits from Activity, so it needs a layout.
The solution that I'm using is:
- I implement two apps: the first one takes images from the camera and does the necessary analysis, and the second one uses the data from the first.
- I implement an OSC transmission for sharing data between the apps.
- I force the first app to keep running, so it keeps getting the camera data while the second one starts.
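The handoff between the two apps can be illustrated with a plain local UDP datagram; OSC just adds a standard message format on top of the same idea. A minimal sketch in Python, where `send_marker_pose`/`recv_marker_pose` and the pose payload are hypothetical names of mine, not the author's actual protocol:

```python
import json
import socket

def send_marker_pose(sock, addr, pose):
    """Serialize the pose computed by the analysis app into one UDP datagram."""
    sock.sendto(json.dumps(pose).encode("utf-8"), addr)

def recv_marker_pose(sock):
    """Block for one datagram in the rendering app and decode the pose."""
    data, _ = sock.recvfrom(4096)
    return json.loads(data.decode("utf-8"))
```

The receiving app binds a UDP socket on localhost and reads poses in a loop; since both apps run on the same device, loopback delivery is fast enough for per-frame updates.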
I think it's not the best option, but I need results, and this way it works.