Is there any way to show an .obj model on a GLSurfaceView with ARCore, but without the camera and without implementing the whole renderer logic? I mean only a preview of the model and, for example, after a click, showing the model on the camera. For example, a library that loads both a preview of a model and the model in AR.
I know that I can use some library to load the model into a GLSurfaceView and ARCore to load the model in AR, but I'm asking whether there is a single library that does all of this.
Prepare two activities. In the first one you can show a list of models for preview; after tapping a model, pass its identifier to the second activity, which enables the camera and shows the model in front of the user.
Remember that the fact you're using ARCore does not force you to use it in every single place and activity.
I hope this is what you had in mind.
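A minimal sketch of that hand-off, assuming an Android app. `PreviewActivity`, `ArActivity`, and the `"model_id"` extra key are hypothetical names, not part of any library:

```java
import android.content.Intent;
import androidx.appcompat.app.AppCompatActivity;

public class PreviewActivity extends AppCompatActivity {
    // Called when the user taps a model in the preview list.
    private void onModelTapped(String modelId) {
        // Only ArActivity creates an ARCore Session; this activity
        // renders the model preview without any camera access.
        Intent intent = new Intent(this, ArActivity.class);
        intent.putExtra("model_id", modelId);
        startActivity(intent);
    }
}
```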
I am trying to use multiple custom objects to be placed on the face using the ARCore SDK. I import the face mesh FBX file provided by Google in the SDK into Blender and place my custom object relative to the face mesh. Then I remove the face mesh and export the object as an .obj file to be used inside my app.
However, the object is not shown at the position I placed it at relative to the face mesh.
I am using Sceneform to render the object on the face.
Any idea what I am doing wrong?
Google Documentation for adding custom objects
I followed the same hierarchy Google provided: I kept the bones, removed the face mesh, and set the main asset as the parent of my object, but the object is still not placed correctly on the face.
Blender Screenshot
I added a modifier and a vertex group as shown in the screenshot. I also reassigned the pivot point of the object to match the face anchor, but it is still not shown in the desired position.
I don't know if you figured it out, but I had the same issue. What I did was adjust my object relative to people's faces instead of Google's presets, which may be out of place inside Blender.
That may not be the best approach, because it means you are constantly testing and moving the object in Blender and then importing it into the app, but it works for me.
To circumvent this, I made my own guidelines inside Blender for my specific use case, and I place new objects relative to this "custom guide".
You might consider this question a sequel to this other one I've recently asked. I have a custom device with a front camera that is mirrored by default and I would like it to always behave like it was flipped horizontally. Note that the end goal of my app is to perform face, barcode and facemask detection (the last one with a custom .tflite model), and I'm trying to understand if it's possible with the current state of CameraX and the quirks of the device I'm using.
For the preview, I can force PreviewView to use a TextureView by calling PreviewView#implementationMode = PreviewView.ImplementationMode.COMPATIBLE so I can just call PreviewView#setScaleX(-1) and it gets displayed like I want.
For taking pictures, I haven't tried yet, but I know I can set setReversedHorizontal(true) on the ImageCapture.Metadata attached to the ImageCapture.OutputFileOptions passed to takePicture(), so the image should be saved mirrored.
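The mirrored-capture path can be sketched like this with the CameraX ImageCapture API; `outputFile`, `mainExecutor`, and `savedCallback` are assumed to be wired up elsewhere in the app:

```java
// Ask CameraX to write the captured JPEG horizontally mirrored.
ImageCapture.Metadata metadata = new ImageCapture.Metadata();
metadata.setReversedHorizontal(true);

ImageCapture.OutputFileOptions options =
        new ImageCapture.OutputFileOptions.Builder(outputFile)
                .setMetadata(metadata)
                .build();

// The mirroring is applied to the saved file, not to the preview.
imageCapture.takePicture(options, mainExecutor, savedCallback);
```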
For image analysis I really don't know. If the input image is taken directly from the preview stream, then it should still be in its default state because PreviewView#setScaleX(-1) only flips the View on screen, right? So in my Analyzer's analyze() I would need to convert the ImageProxy#image to bitmap, then flip it and pass the result to InputImage.fromBitmap().
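The flip itself is plain pixel manipulation. A sketch of the mirroring step, written against the row-major ARGB array you get from `Bitmap#getPixels` (so the same logic applies once the `ImageProxy` has been converted to a bitmap); `FrameFlipper` is a hypothetical helper name:

```java
public final class FrameFlipper {
    // Returns a new pixel array with each row reversed (left-right mirror).
    // `pixels` is row-major: pixel (x, y) lives at index y * width + x.
    public static int[] flipHorizontal(int[] pixels, int width, int height) {
        int[] out = new int[pixels.length];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                out[y * width + (width - 1 - x)] = pixels[y * width + x];
            }
        }
        return out;
    }
}
```

On Android you would normally do the same thing with `Bitmap.createBitmap` and a `Matrix` with `preScale(-1, 1)` rather than looping by hand, then hand the flipped bitmap to `InputImage.fromBitmap()`.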
Is this everything I need? Is there a better way to do this?
I am a beginner in ARCore and I need to display an AR object that can be tapped and can respond with an action (e.g. displaying another activity).
I have tried to do it using examples such as this one - https://creativetech.blog/home/ui-elements-for-arcore-renderable - which uses Sceneform to display UI elements. But Sceneform has some disadvantages for my application, and I also do not need plane detection. My questions are:
Can I display a 'tappable' object, a UI element such as button or a textview, but with GLSurfaceView instead of sceneform?
If UI elements cannot be displayed this way, is it possible to react to a tap on an object displayed on a GLSurfaceView?
Sceneform has now been 'open sourced and archived' - see the note at https://developers.google.com/sceneform/develop.
The main example, at this time, for ARCore is OpenGL based and will allow you display an AR object as I think you want.
Have a look here for the overview: https://developers.google.com/ar/develop/java/quickstart
Some of the links to the code appear to be broken at the moment, but the code is available here (look at the 'hello_ar_java' sample): https://github.com/google-ar/arcore-android-sdk/tree/master/samples
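For the tap question specifically: with a plain GLSurfaceView you can forward touch events to ARCore's `Frame.hitTest(MotionEvent)` and compare the hit pose against your object's anchor. A sketch, where `frame` (the current ARCore `Frame`) and `objectAnchor` are assumed to come from your renderer, and the 10 cm radius is an arbitrary choice:

```java
// Returns true if the tap landed near the object's anchor.
boolean handleTap(Frame frame, Anchor objectAnchor, MotionEvent tap) {
    for (HitResult hit : frame.hitTest(tap)) {
        Pose hitPose = hit.getHitPose();
        Pose objectPose = objectAnchor.getPose();
        float dx = hitPose.tx() - objectPose.tx();
        float dy = hitPose.ty() - objectPose.ty();
        float dz = hitPose.tz() - objectPose.tz();
        // Treat hits within ~10 cm of the anchor as a tap on the object.
        if (dx * dx + dy * dy + dz * dz < 0.1f * 0.1f) {
            return true; // e.g. launch another activity here
        }
    }
    return false;
}
```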
I want to recreate AR measurement app using ARCore and OpenGL without sceneform. Is there any way to display text and anchor it to another object like the image below?
Yup, you can see a preview of it here: reference and example.
You can use the ViewRenderable class from Sceneform.
Your text should be an object, like everything else that you draw on the screen. Then, using Sceneform, you can place and anchor it in reference to another anchor. Check out the solar system example that is provided with the Sceneform SDK. It is a perfect example of putting multiple objects in relation to each other and of placing overlays with text.
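With Sceneform this can be sketched as follows; `R.layout.text_label` (a layout containing a TextView) and `anchoredNode` (the node the label should follow) are assumed to exist in your project:

```java
// Build a ViewRenderable from an Android layout and hang it as a
// child node 10 cm above the object it labels.
ViewRenderable.builder()
        .setView(context, R.layout.text_label)
        .build()
        .thenAccept(renderable -> {
            Node label = new Node();
            label.setParent(anchoredNode); // follows the labeled object
            label.setLocalPosition(new Vector3(0f, 0.1f, 0f));
            label.setRenderable(renderable);
        });
```

Because the label is a child of `anchoredNode`, its local position is relative to that node, so it moves with the measured object automatically.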
I am working on an app where I need to use ARCore. Since I don't have any prior experience with ARCore, I don't know how to do it.
REQUIREMENT
I have a requirement where I have to open a camera and place an AR object at given x, y coordinates.
I know the AR object needs a z coordinate as well.
If my camera view shows a shelf with items placed on it, I want to show an AR object at the 3rd item from the left. I already have the x, y points of that item on the shelf and now just have to place the AR object.
Please let me know if this is even possible.
What I Tried
The Anchor object - it is not created using x and y coordinates.
I tried the Hello Sceneform example from Google. But to use that, I need to calibrate the ARScene first by moving the camera in a specific manner.
I tried the Augmented Images example from Medium, which lets me add an ARObject/AugmentedImage to the camera without calibrating. But it needs an augmented image's position as a reference, and the item on the shelf won't be an AugmentedImage.
Also, the camera view is very blurry on the ARScene.
Please guide me on how I can achieve this. Any tutorial links are welcome.
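One common approach (a sketch, not a drop-in solution) is ARCore's screen-coordinate hit test: once tracking is established, `Frame.hitTest(xPx, yPx)` casts a ray through your known pixel coordinates and lets you anchor at whatever surface it hits. `frame`, `xPx`, and `yPx` are assumed to come from your session and your shelf-detection code; note that ARCore still needs a brief motion phase to detect surfaces before any hits are returned:

```java
// Cast a ray through the shelf item's screen position and anchor
// the object at the first detected surface it hits.
for (HitResult hit : frame.hitTest(xPx, yPx)) {
    Trackable trackable = hit.getTrackable();
    // Only anchor to a tracked plane or feature point.
    if (trackable instanceof Plane || trackable instanceof Point) {
        Anchor anchor = hit.createAnchor();
        // Attach your renderable / AnchorNode to `anchor` here.
        break;
    }
}
```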