After watching this video of AR remote support for HoloLens, I decided to try to do something similar, but with Android and ARCore. Things were going fine until I tried to implement a feature shown at 2:01, which is basically taking a "screenshot" of a specific moment, drawing or inserting objects on it, and then converting them into AR models.
I tried to retain an instance of the Frame; however, when I later try to simulate a hit test on it, I receive the following message:
FrameHitTest invoked on old frame, the previous state of the system is no longer available. Returning empty list.
So my question is: is there another approach I can use to simulate a later hit test, or is it not possible with ARCore for now?
...
Hi Pedro,
Have you already tried placing an anchor at the center of the frame when the remote user saves the photo, and storing its reference somewhere?
With that anchor you can generate the content remotely and then send the relative coordinates and the models to the client phone along with the anchor ID.
After receiving the data, the phone adds the augmented content relative to the anchor it refers to, using the previously stored ID.
You could also include other information useful to you (e.g. the camera's distance from the plane in that specific frame, ...)
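For the anchor-placement step, a minimal sketch could look like this (assuming an ARCore Session is running and the view size is known; anchorRegistry and the ID scheme are hypothetical names, not part of any API):

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Frame;
import com.google.ar.core.HitResult;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical registry that keeps anchors alive and addressable by ID.
Map<String, Anchor> anchorRegistry = new HashMap<>();

// Hit-test the center of the current frame while it is still fresh,
// and store the resulting anchor under a freshly generated ID.
String anchorAtScreenCenter(Frame frame, float viewWidth, float viewHeight) {
    for (HitResult hit : frame.hitTest(viewWidth / 2f, viewHeight / 2f)) {
        Anchor anchor = hit.createAnchor();
        String id = UUID.randomUUID().toString();
        anchorRegistry.put(id, anchor);
        return id; // send this ID along with the saved photo
    }
    return null; // nothing trackable at the center of this frame
}
```

The key point is that the hit test happens immediately, on the current frame, instead of on a retained one.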
Hope this helps or gives you some hints.
Cheers.
Related
I am writing an Android app that uses the camera. To make it user-friendly, I'd like to display a message when the picture is too dark or the user has a finger over the lens. Is there any way to get the camera state and decide whether the lens is covered by something or free?
In order to detect whether the camera is covered by some object or not, you will have to use the OpenCV library and act accordingly once the object is detected. There is nothing built into Android for the task you want to achieve.
Link to OpenCV
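As a rough sketch of what that detection could look like with OpenCV's Java bindings: a covered lens usually produces a frame that is both dark and nearly uniform, so you can check the mean brightness and the standard deviation of the grayscale frame (the thresholds below are guesses you would need to tune):

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.imgproc.Imgproc;

// Returns true when the frame looks dark and uniform, i.e. likely covered.
static boolean looksCovered(Mat frameBgr) {
    Mat gray = new Mat();
    Imgproc.cvtColor(frameBgr, gray, Imgproc.COLOR_BGR2GRAY);
    MatOfDouble mean = new MatOfDouble();
    MatOfDouble stddev = new MatOfDouble();
    Core.meanStdDev(gray, mean, stddev);
    // Thresholds (40 and 15) are arbitrary assumptions to tune per device.
    return mean.get(0, 0)[0] < 40 && stddev.get(0, 0)[0] < 15;
}
```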
You can use Camera.PreviewCallback coupled with the Camera class to get a callback with a byte array; that byte array contains the image data of that frame.
Then you'll need some sort of algorithm/logic to determine whether or not it's "too dark". There is nothing built into Android that can help you determine that.
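For the darkness check, a minimal sketch assuming the default NV21 preview format, where the first width × height bytes are the luma plane (the threshold of 48 is an arbitrary value you would tune):

```java
import android.hardware.Camera;

Camera.PreviewCallback darknessCallback = new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        int lumaPixels = size.width * size.height; // NV21: Y plane comes first
        long sum = 0;
        for (int i = 0; i < lumaPixels; i++) {
            sum += data[i] & 0xFF; // Java bytes are signed, mask to 0..255
        }
        boolean tooDark = (sum / lumaPixels) < 48; // arbitrary threshold
        // when tooDark is true, show your warning message to the user
    }
};
// camera.setPreviewCallback(darknessCallback);
```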
I am trying to display a preview thumbnail when the user moves a finger over the video scrubber.
The only solution I'm finding is to extract thumbnails using some third-party tool and either save them to a server or pass them to the app via some JSON.
What I'm trying to do is something similar to JW Player (http://jwplayer.electroteque.org/controls-preview).
Any idea where to start?
Or is there any standard protocol that supports manually generated thumbnails? Or do I need to go with my own feed format?
I don't quite know what the configuration of your project is, but one possibility is to actually instantiate a mini player and display the progress of the video as the user slides. Essentially, this "mini player" would appear when the user begins dragging, skip to whatever time is specified, and pause. It is similar to a project I am working on now. This is a great reference as well: http://www.autodeskresearch.com/pdf/p1159-matejka.pdf. That technique is quite different from the one I suggested, but it is another alternative depending on your scenario.
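A rough sketch of that mini-player idea (previewPlayer, previewView, and seekBar are hypothetical names; previewPlayer would be a second, muted MediaPlayer prepared with the same video, and the SeekBar's max is assumed to be the video duration in milliseconds):

```java
import android.view.View;
import android.widget.SeekBar;

seekBar.setOnSeekBarChangeListener(new SeekBar.OnSeekBarChangeListener() {
    @Override
    public void onStartTrackingTouch(SeekBar bar) {
        previewView.setVisibility(View.VISIBLE); // show the mini player
    }

    @Override
    public void onProgressChanged(SeekBar bar, int progress, boolean fromUser) {
        if (fromUser) {
            previewPlayer.seekTo(progress); // jump the paused preview player
        }
    }

    @Override
    public void onStopTrackingTouch(SeekBar bar) {
        previewView.setVisibility(View.GONE); // hide it again
    }
});
```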
I have gone through all the Wikitude samples. Is it possible to overlay a live camera feed image that has been saved as a screenshot and create an augmented image from it? If it is possible, what tracker image should I use? Normally the tracker image is one I already know in advance that I am going to track; if the image will only be captured in the future, how can I create a .wtc file for it, and how can I augment my camera feed? Is this possible in Wikitude?
I have to create an application using Wikitude. I like the Wikitude SDK.
If I understand you correctly, you are looking for a way to create target images (that are used for recognition) on the device. This is currently not supported. However, if you have a valid business case, we are able to provide you with a server-based tool to create target images. For more information please contact sales@wikitude.com.
Disclaimer: As you probably already guessed, I'm working for Wikitude.
I'm using code that processes camera data about an AR marker to calculate the Android device's position using AndAR (based on ARToolkit).
The point is that I want to run that analysis while rendering another 3D object, which has nothing in common with the camera feed or the AR tag.
Does anyone have an idea? Thanks!
Answering my own question: it's not possible to analyze the images without outputting them, because the main activity used by ARToolkit inherits from AndARActivity, which inherits from Activity, so it needs a layout.
The solution that I'm using is:
- I implement two apps: the first one takes images from the camera and performs the necessary analysis, and the second one uses the data from the first.
- I implement an OSC transmission for sharing data between the apps (sketched below).
- I force the first app to keep running, so it keeps getting camera data while the second one starts.
I think it's not the best option, but I need results and that way it works.
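For the transmission part, a minimal sketch of the idea, with plain UDP standing in for a full OSC library such as JavaOSC (the port number and payload layout are arbitrary choices):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class PoseSender {
    private final DatagramSocket socket;
    private final InetAddress target;

    public PoseSender() throws Exception {
        socket = new DatagramSocket();
        target = InetAddress.getByName("127.0.0.1"); // both apps on one device
    }

    // Send the device position estimated from the AR marker to the other app.
    public void send(float x, float y, float z) throws Exception {
        byte[] payload = ByteBuffer.allocate(12)
                .putFloat(x).putFloat(y).putFloat(z)
                .array();
        socket.send(new DatagramPacket(payload, payload.length, target, 9000));
    }
}
```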
I would like to make an app with a 360-degree product viewer in it, but I would also like the user to be able to interact with some options alongside it. How can I achieve this? Any expertise is welcome. Thanks.
You can do it this way. Example:
http://jbk404.site50.net/360DegreeView/mobile/common.html
Just copy-paste the source code of the page and the car image sprite, then modify the variables according to your image. After that, implement it in Android using a WebView, loading the page from the assets folder so it will be local and you will not need an internet connection.
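Loading it locally could look like this (assuming the copied page was saved under assets/viewer/common.html, a hypothetical path, and R.id.webview is a placeholder for the WebView in your layout):

```java
import android.webkit.WebView;

WebView webView = findViewById(R.id.webview);
webView.getSettings().setJavaScriptEnabled(true); // the viewer script needs JS
webView.loadUrl("file:///android_asset/viewer/common.html");
```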
So if you want a 3D product viewer, you would need to get hold of 3D models, and then you would need to get to know OpenGL a little better.
Do you have models? If not, I'd suggest you start by getting those.