How to place a single infinite plane in ARCore? - android

How can I place a plane with infinite length using ARCore?
I want to do exactly what is described in this GitHub discussion: URL.
I then found a solution for the Unity ARCore SDK: link.
So how can I achieve that using Java?

I think you're mixing up two different elements.
In ARCore you have a plane, which is a mapped surface on which you can put your objects. The examples you've provided refer to a GameObject. You can only put down as big a "plane" as the area you've mapped using your camera is wide and long, because first you need to map the surface. Of course you can put down as big a GameObject as you want, but that is separate from the plane ARCore has actually detected.

If you want to treat a detected plane as infinite, you can use Plane.isPoseInExtents to check whether a Pose lies within the plane's estimated extents.
If you want to use this in combination with some Sceneform UX functionality, you could create your own version of TranslationController that uses isPoseInExtents instead of isPoseInPolygon (the default check).
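A minimal Java sketch of that idea outside of Sceneform's internals (the class and method names here, such as placeOnPlane, are illustrative assumptions): on a tap, run an ARCore hit test and accept hits whose pose falls within the plane's extents rather than only inside its detected polygon.

```java
import android.view.MotionEvent;

import com.google.ar.core.Anchor;
import com.google.ar.core.Frame;
import com.google.ar.core.HitResult;
import com.google.ar.core.Plane;
import com.google.ar.core.Pose;

public final class ExtentPlacement {

    /** Returns an anchor on the first plane whose extents contain the hit pose, or null. */
    public static Anchor placeOnPlane(Frame frame, MotionEvent tap) {
        for (HitResult hit : frame.hitTest(tap)) {
            if (hit.getTrackable() instanceof Plane) {
                Plane plane = (Plane) hit.getTrackable();
                Pose pose = hit.getHitPose();
                // The default Sceneform behavior corresponds to plane.isPoseInPolygon(pose);
                // isPoseInExtents(pose) is the looser check that treats the plane as its
                // full estimated rectangle.
                if (plane.isPoseInExtents(pose)) {
                    return plane.createAnchor(pose);
                }
            }
        }
        return null;
    }
}
```

Note that even the extents are not truly infinite: they grow as ARCore maps more of the surface, which ties back to the first answer above.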

Related

Using custom 3D objects with ARCore Augmented Faces

I am trying to place multiple custom objects on the face using the ARCore SDK. I import the face mesh FBX file provided by Google in the SDK into Blender and place my custom object relative to the face mesh. Then I remove the face mesh and export the object as an .obj file to be used inside my app.
However, the object is not shown at the position I placed it at relative to the face mesh.
I am using sceneform to render the object on the face.
Any idea what I am doing wrong?
Google Documentation for adding custom objects
I followed the same hierarchy Google provided: I left the bones, removed the face mesh, and set the main asset as the parent of my object,
but the object is still not placed correctly on the face.
Blender Screenshot
I added a modifier and a vertex group as shown in the screenshot. I also reassigned the pivot point of the object to be the same as the face anchor, but it is still not shown in the desired position.
I don't know if you figured it out, but I had the same issue. What I did was adjust my object relative to people's faces instead of Google's presets, which may be out of place inside Blender.
That may not be the best approach, because it means you are constantly testing and moving the object in Blender and then importing it into the app, but it works for me.
To circumvent this, I made my own guidelines inside Blender for my specific use case, and I place new objects relative to this "custom guide".

ARCore – Object does not show in correct depth in Face Augmentation

I tried to place an object on a face, but I don't understand how to set the object's depth. For example, when I add a 3D object such as spectacle frames to the face, it does not show at the correct depth.
When you use the Augmented Faces feature, it's worth noting that when a face is detected, ARCore first places a face anchor (which is located behind the nose or, more precisely, inside the skull), and then places a canonical mask whose pivot point resides at the same place as the anchor.
Hence, if you want to place your glasses at the appropriate depth, set the pivot point of your 3D object the same way it is set on the canonical mask. In other words, marry these pivot points.
Another way of doing this is to get the canonical face mesh from here:
https://github.com/google-ar/arcore-android-sdk/blob/master/assets/canonical_face_mesh.fbx
and then follow the approach described very nicely in this blog post by Kristina Simakova:
https://creativetech.blog/home/try-on-glasses-arcore-augmented-faces
In Blender you can place whatever model you want anywhere on that face mesh, while also maintaining scale, which is very important.
Also very important is this part:
Following the instructions from ARCore documentation, add the glass model under the "asset" object. Check out this short tutorial to learn about parenting in Blender.
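On the rendering side, here is a minimal Sceneform sketch of attaching such an exported model to the detected face so it inherits the face anchor and its pivot. It assumes a session configured with AugmentedFaceMode.MESH3D; the class and method names (FaceModelAttacher, update) are illustrative, not from the answer above.

```java
import com.google.ar.core.AugmentedFace;
import com.google.ar.core.TrackingState;
import com.google.ar.sceneform.ArSceneView;
import com.google.ar.sceneform.rendering.ModelRenderable;
import com.google.ar.sceneform.ux.AugmentedFaceNode;

import java.util.HashMap;
import java.util.Map;

public final class FaceModelAttacher {

    private final Map<AugmentedFace, AugmentedFaceNode> faceNodes = new HashMap<>();

    /** Call once per frame, e.g. from a Scene.OnUpdateListener. */
    public void update(ArSceneView sceneView, ModelRenderable glassesModel) {
        for (AugmentedFace face : sceneView.getSession().getAllTrackables(AugmentedFace.class)) {
            if (face.getTrackingState() == TrackingState.TRACKING && !faceNodes.containsKey(face)) {
                AugmentedFaceNode node = new AugmentedFaceNode(face);
                node.setParent(sceneView.getScene());
                // The renderable is positioned relative to the face anchor, which is why the
                // model's pivot in Blender must line up with the canonical mesh's pivot.
                node.setFaceRegionsRenderable(glassesModel);
                faceNodes.put(face, node);
            } else if (face.getTrackingState() == TrackingState.STOPPED && faceNodes.containsKey(face)) {
                faceNodes.remove(face).setParent(null);
            }
        }
    }
}
```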

Unity LeanTouch cannot move an object displayed on top of a UI Image

I have displayed a 3D object on top of a UI Image and applied the LeanTouch library to the 3D object for moving, scaling and rotating it. The issue is that I am able to scale and rotate the object but cannot move it. Am I missing anything?
I cannot comment yet, so I need to post this as an answer.
In one of my projects I was using the TouchKit tool for Unity3D. It has many features, like rotating and moving objects.
To move your 3D object with LeanTouch, you need to add the LeanTranslate script to your 3D object.

Rendering a frame around a 3d object using opengl

I am rendering a 3D object using Rajawali and OpenGL on Android. I want to render a cube around the boundaries of the object, exactly as shown.
I understand that I may need to use a set of lines or the Stack API in Rajawali, but I don't understand how I can use the Object3D API to figure out which points make up the lines. I tried using the Object3D.getGeometry().getBoundingBox() API, but it's returning null.
I need hints on how I can derive the min/max Number3D points using the Rajawali API, and whether creating a set of 12 lines from those points would be the right solution. Or should I somehow augment the model after loading so that I can scale the cube along with it?
This may be too obvious but please consider me a beginner. Thanks.
Object3D.getGeometry().getBoundingBox() returns an undefined value until you call the Object3D.getGeometry().computeBoundingBox() method, which updates the bounding box information.
The answer below will give you more information if you are using complex 3D objects with a scene hierarchy:
Any way to get a bounding box from a three.js Object3D?
Update:
The best way is to create a separate object with lines from the min/max values and apply the original object's transformation to it, or to make the bounding-box geometry a child of the original object.
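A rough Java sketch of that second option: build the 8 corners from the min/max values and add 12 Line3D edges as children so they follow the object's transformation. It assumes a Rajawali build exposing Geometry3D.getBoundingBox() with getMin()/getMax() accessors; accessor names vary a little between releases, so treat them as placeholders.

```java
import org.rajawali3d.Object3D;
import org.rajawali3d.bounds.BoundingBox;
import org.rajawali3d.math.vector.Vector3;
import org.rajawali3d.primitives.Line3D;

import java.util.Stack;

public final class WireframeBox {

    /** Adds 12 yellow edge lines around the object's bounding box, in its local space. */
    public static void addTo(Object3D target) {
        BoundingBox box = target.getGeometry().getBoundingBox();
        Vector3 min = box.getMin();
        Vector3 max = box.getMax();

        // The 8 corners of the axis-aligned bounding box.
        Vector3[] c = {
            new Vector3(min.x, min.y, min.z), new Vector3(max.x, min.y, min.z),
            new Vector3(max.x, max.y, min.z), new Vector3(min.x, max.y, min.z),
            new Vector3(min.x, min.y, max.z), new Vector3(max.x, min.y, max.z),
            new Vector3(max.x, max.y, max.z), new Vector3(min.x, max.y, max.z)
        };
        // Index pairs for the 12 edges.
        int[][] edges = {
            {0, 1}, {1, 2}, {2, 3}, {3, 0},   // face at min z
            {4, 5}, {5, 6}, {6, 7}, {7, 4},   // face at max z
            {0, 4}, {1, 5}, {2, 6}, {3, 7}    // connecting edges
        };
        for (int[] e : edges) {
            Stack<Vector3> points = new Stack<>();
            points.add(c[e[0]]);
            points.add(c[e[1]]);
            // Children share the parent's model matrix, so the frame scales and
            // rotates together with the object.
            target.addChild(new Line3D(points, 1f, 0xffffff00));
        }
    }
}
```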

How to move an image in 3 dimensions in an Android application

I want to move an image in a 3-dimensional way in my Android application according to my device's movement. For this, I am getting the x, y, z coordinate values through SensorEvent, but I am unable to find APIs to move an image in 3 dimensions. Could anyone please point me to a way (any APIs) to achieve this?
Depending on the particulars of your application, you could consider using OpenGL ES for manipulations in three dimensions. A quite common approach then would be to render the image onto a 'quad' (basically a flat surface consisting of two triangles) and manipulate that using matrices you construct based on the accelerometer data.
An alternative might be to look into extending the standard ImageView, which out of the box supports manipulation via 3x3 matrices. For rotation this will be sufficient, but you will obviously need an extra dimension for translation, which you're probably after, given your remark about 'moving' an image.
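For the ImageView route, a tiny sketch of driving the image matrix from sensor-derived offsets (the helper name and the pixel scale factor are arbitrary assumptions); note that this stays 2D, which is exactly the limitation mentioned above:

```java
import android.graphics.Matrix;
import android.widget.ImageView;

public final class SensorImageMover {

    /** Shifts the ImageView's bitmap by a 2D offset derived from sensor values. */
    public static void applySensorOffset(ImageView imageView, float sensorX, float sensorY) {
        imageView.setScaleType(ImageView.ScaleType.MATRIX);
        Matrix m = new Matrix();
        // Arbitrary scaling of sensor units to pixels, purely for illustration.
        m.setTranslate(sensorX * 20f, sensorY * 20f);
        imageView.setImageMatrix(m);
    }
}
```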
If you decide to go with the first suggestion, this example code should be quite useful to start with. You'll probably be able to plug your sensor data straight into that and simply add the required math for the matrix manipulations.
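For the OpenGL ES route, here is a minimal sketch of turning accelerometer values into a model matrix for the textured quad. The class name, field names and scale factors are assumptions for illustration; the quad geometry and shader setup are omitted.

```java
import android.opengl.Matrix;

public final class SensorDrivenQuad {

    private final float[] modelMatrix = new float[16];

    /** Rebuilds the quad's model matrix from the latest accelerometer values. */
    public void updateModelMatrix(float sensorX, float sensorY, float sensorZ) {
        Matrix.setIdentityM(modelMatrix, 0);
        // Translate the quad in 3D based on the sensor readings (arbitrary scale factor).
        Matrix.translateM(modelMatrix, 0, sensorX * 0.1f, sensorY * 0.1f, sensorZ * 0.1f);
        // Optionally tilt the quad a little as well.
        Matrix.rotateM(modelMatrix, 0, sensorX * 5f, 1f, 0f, 0f);
        // In onDrawFrame(), multiply by the view/projection matrices and upload the result
        // to the vertex shader with GLES20.glUniformMatrix4fv(...).
    }

    public float[] getModelMatrix() {
        return modelMatrix;
    }
}
```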
