I am rendering a 3D object using Rajawali and OpenGL on Android. I want to render a cube around the boundaries of the object, exactly as shown.
I understand that I may need to use a set of lines or the Stack API in Rajawali, but I don't understand how I can use the Object3D API to figure out which points make up each line. I tried the Object3D.getGeometry().getBoundingBox() API but it returns null.
I need hints on how I can derive the min/max Number3d points using the Rajawali API, and whether creating a set of 12 lines from those points would be the right solution. Or should I somehow attach it to the model after loading so that it scales along with it?
This may be too obvious but please consider me a beginner. Thanks.
Object3D.getGeometry().getBoundingBox() returns an undefined value until you call the Object3D.getGeometry().computeBoundingBox() method, which updates the bounding-box information.
The answer linked below will give you more information if you are using complex 3D objects with a scene hierarchy.
Any way to get a bounding box from a three.js Object3D?
Update:
The best way is to create a separate object made of lines using the min/max values and apply the original object's transformation to it. Alternatively, make the bounding-box geometry a child of the original object so it follows the model.
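For illustration, here is a rough Java sketch of that approach, assuming a Rajawali-style API where Geometry3D.getBoundingBox() exposes getMin()/getMax() (Number3D in older releases, Vector3 in newer ones) and a Line3D primitive is available; adjust class and package names to your Rajawali version:

import java.util.Stack;

import org.rajawali3d.Object3D;
import org.rajawali3d.bounds.BoundingBox;
import org.rajawali3d.materials.Material;
import org.rajawali3d.math.vector.Vector3;
import org.rajawali3d.primitives.Line3D;

public class BoundingBoxOutline {

    /**
     * Builds the 12 edges of the axis-aligned bounding box of 'target'
     * and attaches them as children, so they inherit the object's
     * position, rotation and scale.
     */
    public static void attachOutline(Object3D target, int color) {
        // If this is still null in your Rajawali version, make sure the geometry
        // is fully loaded / the bounding box has been computed first.
        BoundingBox box = target.getGeometry().getBoundingBox();
        Vector3 min = box.getMin();   // assumed accessors; older versions use Number3D
        Vector3 max = box.getMax();

        // The 8 corners of the box.
        Vector3[] c = new Vector3[] {
            new Vector3(min.x, min.y, min.z), new Vector3(max.x, min.y, min.z),
            new Vector3(max.x, max.y, min.z), new Vector3(min.x, max.y, min.z),
            new Vector3(min.x, min.y, max.z), new Vector3(max.x, min.y, max.z),
            new Vector3(max.x, max.y, max.z), new Vector3(min.x, max.y, max.z)
        };

        // Index pairs describing the 12 edges.
        int[][] edges = {
            {0,1},{1,2},{2,3},{3,0},   // face at z = min.z
            {4,5},{5,6},{6,7},{7,4},   // face at z = max.z
            {0,4},{1,5},{2,6},{3,7}    // edges connecting the two faces
        };

        for (int[] e : edges) {
            Stack<Vector3> points = new Stack<>();
            points.add(c[e[0]]);
            points.add(c[e[1]]);
            Line3D line = new Line3D(points, 1f, color);
            line.setMaterial(new Material());
            target.addChild(line);     // child => scales/rotates with the model
        }
    }
}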
I tried to place an object on a face, but I don't understand how to set the object's depth. For example, when I add a 3D object such as spectacle frames to a face, it does not show at the correct depth.
When you use the Augmented Faces feature, it's worth noting that when a face is detected, ARCore first places a face anchor (which is located behind the nose or, more precisely, inside the skull), and then places a canonical face mask whose pivot point sits in the same place as the anchor.
Hence, if you want to place your glasses at the appropriate depth, set the pivot point of your 3D model the same way it is set on the canonical mask. In other words, marry these pivot points.
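For reference, a minimal Sceneform-style sketch of attaching such a model (assuming the Sceneform UX library with its AugmentedFaceNode and a session configured for Augmented Faces; call it once per newly detected face):

import com.google.ar.core.AugmentedFace;
import com.google.ar.core.TrackingState;
import com.google.ar.sceneform.Scene;
import com.google.ar.sceneform.rendering.ModelRenderable;
import com.google.ar.sceneform.ux.AugmentedFaceNode;

public class GlassesAttacher {

    /** Call once for each newly detected face (keep track of faces that already have a node). */
    public static void attachGlasses(Scene scene, AugmentedFace face, ModelRenderable glassesModel) {
        if (face.getTrackingState() != TrackingState.TRACKING) {
            return;
        }
        // The node sits at the face's center pose, i.e. the canonical mask's pivot,
        // so a model exported with its pivot on that same point lines up in depth.
        AugmentedFaceNode faceNode = new AugmentedFaceNode(face);
        faceNode.setParent(scene);
        faceNode.setFaceRegionsRenderable(glassesModel);
    }
}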
Another way of doing this is getting the canonical face mesh from here
https://github.com/google-ar/arcore-android-sdk/blob/master/assets/canonical_face_mesh.fbx
And, as described very nicely in this blog post by Kristina Simakova:
https://creativetech.blog/home/try-on-glasses-arcore-augmented-faces
In Blender you can place whatever model you want anywhere on that face mesh, while also maintaining scale, which is very important.
Also very important is this part:
Following the instructions from ARCore documentation, add the glass
model under “asset“ object. Check out this short tutorial to learn
about parenting in Blender.
How can I place a plane with infinite length using ARCore? I want to do exactly what is described in this GitHub discussion: URL.
I found a solution for the Unity ARCore SDK: link.
So how can I achieve that using Java?
I think you're mixing up two different elements.
In ARCore, a plane is a mapped surface on which you can put your objects, whereas the examples you've provided refer to a GameObject. You can only have a "plane" as wide and long as the area you've mapped with your camera, because you first need to map the surface. Of course you can make the GameObject itself as big as you want, but that is separate from the detected plane.
If you want to treat a detected plane as infinite, you can use Plane.isPoseInExtents to check whether a Pose lies within this plane's extents.
If you want to use this in combination with some Sceneform UX functionality, you could create your own version of TranslationController that uses isPoseInExtents instead of isPoseInPolygon (the default).
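A hedged sketch of that check with the plain ARCore Java API (hitPlaneExtents is just an illustrative helper name), accepting a tap anywhere within a plane's estimated extents rather than only inside its detected polygon:

import java.util.List;

import android.view.MotionEvent;

import com.google.ar.core.Frame;
import com.google.ar.core.HitResult;
import com.google.ar.core.Plane;
import com.google.ar.core.Trackable;

public class PlaneExtentsHitTest {

    /** Returns the first hit that lies within a plane's extents, or null if none does. */
    public static HitResult hitPlaneExtents(Frame frame, MotionEvent tap) {
        List<HitResult> hits = frame.hitTest(tap);
        for (HitResult hit : hits) {
            Trackable trackable = hit.getTrackable();
            if (trackable instanceof Plane) {
                Plane plane = (Plane) trackable;
                // isPoseInExtents is more permissive than isPoseInPolygon: it only
                // checks the plane's bounding extents, which is the closest you get
                // to treating the detected plane as unbounded.
                if (plane.isPoseInExtents(hit.getHitPose())) {
                    return hit;
                }
            }
        }
        return null;
    }
}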
The sample project shipped with the OpenCV SDK, named "OpenCV Sample - color-blob-detection", identifies an area according to the color of the object you select and then draws contours around that object. Is it possible to extract/highlight only that particular area? There may also be some other object in the background with the same color, but that is not my desired object.
I know this may be tricky and involve a lot of processing, but some guidance would help. How can this be achieved?
Note: the reason I am asking is that later we want to model a temporary 3D object on top of the selected real-time object, so differentiating it from the background objects is necessary.
You should use pointPolygonTest(). In the process() function, add only one contour to mContours: the one for which pointPolygonTest() reports that the touch coordinates lie inside it.
You will need to pass the coordinates to the process() method.
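A small sketch of that selection step, assuming the sample's contours are a List<MatOfPoint> and that the touch point has already been converted into frame coordinates (findTouchedContour is an illustrative helper, not part of the sample):

import java.util.List;

import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.imgproc.Imgproc;

public class ContourPicker {

    /** Returns the contour containing the touch point, or null if none does. */
    public static MatOfPoint findTouchedContour(List<MatOfPoint> contours, Point touch) {
        for (MatOfPoint contour : contours) {
            // pointPolygonTest needs a MatOfPoint2f; it returns a positive value when
            // the point lies inside the contour, 0 on the edge, and a negative value outside.
            MatOfPoint2f contour2f = new MatOfPoint2f(contour.toArray());
            if (Imgproc.pointPolygonTest(contour2f, touch, false) >= 0) {
                return contour;
            }
        }
        return null;
    }
}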
I'm new to Android development, and I have to develop an application that can help an autistic child learn numbers. I have a few ideas and I've been trying to learn and implement the code, but it has failed. The question is: how can I apply motion code or a sprite to draw numbers or letters? For example, I want to make a penguin move along a line and draw the number nine.
There is an example from mybringback.com in which an image moves to draw a rectangle. How can I adapt it to draw a number? I'm sorry if I'm asking too much, I'm just trying to get some ideas.
I think that you should first build a utility program in order to create the "path vector".
What I mean by a path vector is simply a vector of points (where a point has an x value and a y value). Your utility should let you draw whatever you want with a simple pen: draw on a surface and store points while the mouse is down, and ignore points while the mouse is up.
Then, in the main program, you will just have to read back the path of your number/letter.
I've tried to implement something like this for the Sugar OLPC platform, without serializing the path into files: I was able to draw and to view the animation, using exactly the process I've just described.
Hope it can help you.
P.S.: I used the word mouse, but you guessed that I'm talking about your finger...
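A minimal sketch of such a recording utility on Android (PathRecorderView is a made-up name for illustration): a custom View that stores points while the finger is down and simply stops recording when it is lifted.

import java.util.ArrayList;
import java.util.List;

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.PointF;
import android.view.MotionEvent;
import android.view.View;

public class PathRecorderView extends View {

    private final List<PointF> pathPoints = new ArrayList<>();   // the "path vector"
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public PathRecorderView(Context context) {
        super(context);
        paint.setStrokeWidth(8f);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
            case MotionEvent.ACTION_MOVE:
                // Record points only while the finger is down.
                pathPoints.add(new PointF(event.getX(), event.getY()));
                invalidate();
                return true;
            default:
                // Finger lifted: stop recording.
                return true;
        }
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // Draw the stroke recorded so far as connected segments.
        for (int i = 1; i < pathPoints.size(); i++) {
            PointF a = pathPoints.get(i - 1);
            PointF b = pathPoints.get(i);
            canvas.drawLine(a.x, a.y, b.x, b.y, paint);
        }
    }

    /** Hand the recorded path over to the animation side of the app. */
    public List<PointF> getPathPoints() {
        return pathPoints;
    }
}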
There are various ways to achieve animation effects. One approach that is quite versatile involves creating a custom View or SurfaceView in which you override the onDraw method. Various tutorials can be found on this; the official Android discussion of it is here:
http://developer.android.com/guide/topics/graphics/2d-graphics.html#on-view
Your implementation will look something like this:
// Find elapsed time since previous draw
// Compute new position of drawable/bitmap along figure
// Draw bitmap in appropriate location
// Add line to buffer containing segments of curve drawn so far
// Render all segments in curve buffer
// Take some action to call for the rendering of the next frame (this may be done in another thread)
Obviously a simplification. For a very simplistic tutorial, see here:
http://www.techrepublic.com/blog/software-engineer/bouncing-a-ball-on-androids-canvas/1733/
Note that different implementations of this technique will require different levels of involvement by you; for example, if you use a SurfaceView, you are in charge of calling the onDraw method, whereas subclassing the normal View lets you leave Android in charge of redrawing (at the expense of limiting your ability to draw on a different thread). In this respect, Google remains your friend =]
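As a rough illustration of the View-subclass variant (the sprite bitmap and the path list are placeholders you would supply, e.g. from the recording utility above), each onDraw advances the sprite one step along the path and re-renders the segments traced so far:

import java.util.List;

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.PointF;
import android.view.View;

public class TracingView extends View {

    private final List<PointF> path;   // e.g. the recorded "path vector"
    private final Bitmap sprite;       // e.g. the penguin image
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private int index = 0;             // how far along the path we are

    public TracingView(Context context, List<PointF> path, Bitmap sprite) {
        super(context);
        this.path = path;
        this.sprite = sprite;
        paint.setStrokeWidth(8f);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // Render all segments of the figure traced so far.
        for (int i = 1; i <= index && i < path.size(); i++) {
            PointF a = path.get(i - 1);
            PointF b = path.get(i);
            canvas.drawLine(a.x, a.y, b.x, b.y, paint);
        }
        // Draw the sprite at the current position along the figure.
        if (index < path.size()) {
            PointF p = path.get(index);
            canvas.drawBitmap(sprite, p.x, p.y, paint);
            index++;
            // Ask Android to schedule the next frame (with a plain View the
            // framework drives redrawing; we only request it).
            postInvalidateOnAnimation();
        }
    }
}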
I have a 3D cube created using GL_TRIANGLE_STRIP. Is it possible to draw points (using GL_POINTS) or a triangle (using GL_TRIANGLES) on/inside my 3D cube? How could that be achieved?
If you want to draw something directly on a face of another object (using the exact same vertex coordinates), you will need to use glPolygonOffset to prevent stitching. There is a chapter in the Red Book that explains it.
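As a hedged example with GLES20 on Android (drawCube() and drawOverlayTriangle() stand in for your own draw calls):

import android.opengl.GLES20;

public class OverlayRenderer {

    // Push the cube's filled faces slightly back in depth so overlay geometry
    // that shares the same coordinates wins the depth test without stitching.
    void drawCubeWithOverlay() {
        GLES20.glEnable(GLES20.GL_POLYGON_OFFSET_FILL);
        GLES20.glPolygonOffset(1.0f, 1.0f);
        drawCube();                      // your GL_TRIANGLE_STRIP cube
        GLES20.glDisable(GLES20.GL_POLYGON_OFFSET_FILL);

        drawOverlayTriangle();           // your GL_TRIANGLES / GL_POINTS pass
    }

    void drawCube() { /* existing cube drawing code */ }
    void drawOverlayTriangle() { /* existing overlay drawing code */ }
}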
If by inside you mean drawing something in the volume of the cube, then there is nothing stopping you. You just need to get the alpha values and blending right to actually see through the cube. Look for a generic tutorial on transparency in OpenGL.
But maybe I'm horribly mistaken and what you are looking for is textures.
If I understand you correctly, you could just generate the appropriate texture with the points on it and apply it to the cube.
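If a texture is indeed what you're after, one possible approach on Android is to draw the points into a Bitmap with Canvas and upload it with GLUtils.texImage2D, roughly like this:

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.opengl.GLES20;
import android.opengl.GLUtils;

public class PointTexture {

    /** Draws a few example points into a bitmap and uploads it as a GL texture. */
    public static int create() {
        Bitmap bitmap = Bitmap.createBitmap(256, 256, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(bitmap);
        canvas.drawColor(Color.WHITE);

        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setColor(Color.RED);
        canvas.drawCircle(64f, 64f, 4f, paint);     // example "points"
        canvas.drawCircle(192f, 128f, 4f, paint);

        int[] textureIds = new int[1];
        GLES20.glGenTextures(1, textureIds, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureIds[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
        bitmap.recycle();
        return textureIds[0];
    }
}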