How to place object without surface detection in ARCore? - android

I want to place an object at a real-world coordinate. Is it possible to do this without surface detection?

Yes, you can place an object relative to another point, for example the position of the camera, i.e. the centre of the current view.
See this example, which uses the original, now deprecated Sceneform: https://stackoverflow.com/a/56681538/334402
Sceneform is now being updated and maintained here, and the same basic approach will still work: https://github.com/thomasgorisse/sceneform-android-sdk
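As a rough illustration, the sketch below creates an ARCore anchor about one metre in front of the current camera pose. It assumes you already have a Session and the latest Frame; the class and method names are only illustrative.

    import com.google.ar.core.Anchor;
    import com.google.ar.core.Camera;
    import com.google.ar.core.Frame;
    import com.google.ar.core.Pose;
    import com.google.ar.core.Session;
    import com.google.ar.core.TrackingState;

    public final class CameraRelativeAnchor {

        /** Returns an anchor ~1 m in front of the camera, or null if not tracking yet. */
        public static Anchor createInFrontOfCamera(Session session, Frame frame) {
            Camera camera = frame.getCamera();
            if (camera.getTrackingState() != TrackingState.TRACKING) {
                return null;
            }
            // Offset along -Z of the camera pose, i.e. straight ahead of the current view.
            Pose oneMetreAhead = camera.getPose().compose(Pose.makeTranslation(0f, 0f, -1f));
            return session.createAnchor(oneMetreAhead);
        }
    }

Because the anchor is created from a plain Pose rather than a hit result, it does not depend on any detected surface, which is exactly the trade-off discussed in the next answer.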

It is possible but not recommended, as anchors are supposed to be attached to a trackable surface. What you can do is shoot a ray from your camera and place an anchor where the ray intersects a Trackable plane. You can set your app to do this a set number of seconds after the user starts the app (make sure to let the user know they need to look around to help plane detection).
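A minimal sketch of that ray approach, assuming you have the latest Frame and the view dimensions; everything except the ARCore calls is an illustrative name.

    import com.google.ar.core.Anchor;
    import com.google.ar.core.Frame;
    import com.google.ar.core.HitResult;
    import com.google.ar.core.Plane;
    import com.google.ar.core.Trackable;

    import java.util.List;

    public final class CenterHitTest {

        /** Casts a ray through the screen centre and anchors to the first detected plane it hits. */
        public static Anchor anchorAtScreenCentre(Frame frame, float viewWidth, float viewHeight) {
            List<HitResult> hits = frame.hitTest(viewWidth / 2f, viewHeight / 2f);
            for (HitResult hit : hits) {
                Trackable trackable = hit.getTrackable();
                if (trackable instanceof Plane
                        && ((Plane) trackable).isPoseInPolygon(hit.getHitPose())) {
                    return hit.createAnchor();
                }
            }
            return null; // No plane has been detected under the screen centre yet.
        }
    }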

Related

Rendering a 3D object at a specified point without using plane detection in ARCore (Sceneform)

I am working on an augmented reality application where I have to place a 3D object at a specified point on the AR screen (e.g. placing an object on a hand). Is it possible to do this without using Sceneform's plane detector, as I do not want to place the object on a plane but at a specified point on the screen? Please provide a source if anyone has an answer for this.
You can place an object at a point on the screen rather than on a plane - there are several approaches. One is to set an anchor relative to the camera position.
See this answer: https://stackoverflow.com/a/53175458/334402 and full code is available here if you want to play with it: https://github.com/mickod/LineView
Note that this will place it at that point in the 'world' absolutely, i.e. not relative to a person's hand. So if the person moves their hand, the renderable will remain where you put it rather than moving with the hand.
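For the Sceneform side, a minimal sketch of attaching a renderable to such an anchor might look like the following; it assumes an ArFragment and an already-built ModelRenderable, and the class name is illustrative.

    import com.google.ar.core.Anchor;
    import com.google.ar.sceneform.AnchorNode;
    import com.google.ar.sceneform.rendering.ModelRenderable;
    import com.google.ar.sceneform.ux.ArFragment;

    public final class PlaceRenderable {

        /** Parents the renderable to an AnchorNode so it stays at the anchored world position. */
        public static void attach(ArFragment arFragment, Anchor anchor, ModelRenderable renderable) {
            AnchorNode anchorNode = new AnchorNode(anchor);
            anchorNode.setRenderable(renderable);
            anchorNode.setParent(arFragment.getArSceneView().getScene());
        }
    }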

Object on 2D plane ARCore android

I am working on an app where I need to use ARCore. Since I don't have any prior experience with ARCore, I don't know how to do it.
REQUIREMENT
I have a requirement where I have to open a camera and place an AR object at any x,y coordinates.
I know the AR object needs a 'z' coordinate as well.
If my camera view shows a shelf with items placed on it, I want to show an AR object at the 3rd item from the left. I already have the x,y points of that item on the shelf and now just have to place the AR object.
Please let me know whether this is even possible.
What I Tried
The Anchor object: it is not created using x and y coordinates.
I tried the Hello Sceneform example from Google, but to use that I need to calibrate the AR scene first by moving the camera in a specific manner.
I tried the Augmented Images example from Medium, which lets me add an AR object / AugmentedImage to the camera view without calibrating, but it needs an augmented image's position as a reference. The item on the shelf won't be an AugmentedImage.
Also, the camera view is very blurry in the AR scene.
Please guide me on how I can achieve this. Any tutorial links are welcome.

How to attach an overlay to a detected object using Android and OpenCV

I'm writing an Android app using OpenCV for my master's that will be something like a game. The main goal is to detect a car in a selected area. The "prize" will be triggered randomly while detecting cars. When the user hits the proper car, I want to display a 3D object overlay on the screen, attach it to the middle of the car, and keep it there so that when the user changes the angle of their view of the car, the object is also seen from a different angle.
At the moment I have EVERYTHING besides attaching the object. I've created the detection, I'm drawing the 3D overlay, and I've created functions that allow me to rotate the camera, etc. BUT I do not have any clue how I can attach the overlay to a specific point. Because I don't have this, I have no reference point from which to recalculate the renderer to change the overlay perspective.
Please, I really need some help; even a small idea will be fine:
How can I attach the overlay to a specific point in the real world?
(Sorry, I couldn't comment. Need at least 50 points to do that ... :P )
I assume your image of the car is coming from a camera feed and you are drawing the 3D object in OpenGL. If so, you can try this:
Set the pixel format of the OpenGL layer to RGBA_8888, so that you can set the background of the OpenGL surface to a transparent color.
Use a RelativeLayout as the layout of your activity.
First add the OpenCV camera view to it at full height and width.
Then add the OpenGL layer on top, also at full height and width.
Get the position of the real car from the OpenCV layer as a pixel value, or however you did it.
Then scale it to your OpenGL coordinates so that you can draw the overlay in the right spot.
It worked for me; hope it works for you too. A rough sketch of the layering setup is shown below.
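A minimal sketch of that transparent-overlay setup, assuming a GLES 2.0 GLSurfaceView stacked above an OpenCV camera view in a RelativeLayout; the class and method names here are illustrative.

    import android.graphics.PixelFormat;
    import android.opengl.GLSurfaceView;

    public final class OverlaySetup {

        /** Configure the GL view so the OpenCV camera preview shows through its background. */
        public static void makeTransparentOverlay(GLSurfaceView glView, GLSurfaceView.Renderer renderer) {
            glView.setEGLContextClientVersion(2);
            // Request an RGBA_8888 surface so the clear color can be fully transparent.
            glView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
            glView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
            glView.setZOrderOnTop(true); // draw above the camera preview in the RelativeLayout
            glView.setRenderer(renderer);
            // In the renderer, clear with GLES20.glClearColor(0f, 0f, 0f, 0f) so only the
            // 3D overlay is visible on top of the camera feed.
        }
    }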

How can I make OpenGL interactive?

I am making an Android game app and am using OpenGL to load 3D OBJ models into the app. I would like to know if anyone can help me make these objects interactive. All I really need is to make an object clickable, but it would be cool to learn more, such as dragging it around the screen.
Thanks for any help.
What you want is called picking.
This isn't an easy task, and depending on what exactly you want to do, you have multiple options.
The problem: you want to select an object in 3D space based on a 2D coordinate (the mouse/touch position). Because the mouse coordinates are 2D, you are missing one coordinate needed to determine the exact click position in 3D space.
One possible option would be:
Render your object with a specific color (e.g. completely red)
After that save the current display buffer to a variable
Clear the display buffer and render your model again with the standard settings (this is the screen that is displayed to the user)
Determine the click/touch position and translate it to a coordinate in your display area
Check the color at this coordinate in your saved display buffer. If the color at this position is red, the user clicked/touched the object
This approach isn't that flexible, but the implementation is very simple compared to other solutions. It is limited because you can only detect whether the user clicked/touched a certain object or not; you cannot determine the exact position on the object.
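A minimal sketch of the read-back step on Android (GLES 2.0), assuming the flat-color picking pass has just been rendered and this runs on the GL thread; the helper name is illustrative.

    import android.opengl.GLES20;

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public final class ColorPicking {

        /** Reads the pixel under a touch point from the current GL framebuffer as {r, g, b, a}. */
        public static int[] readPixelAt(int touchX, int touchY, int viewHeight) {
            ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
            // OpenGL's origin is bottom-left, Android touch coordinates are top-left, so flip y.
            GLES20.glReadPixels(touchX, viewHeight - touchY, 1, 1,
                    GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
            return new int[] {
                    pixel.get(0) & 0xFF, pixel.get(1) & 0xFF,
                    pixel.get(2) & 0xFF, pixel.get(3) & 0xFF
            };
        }
    }

A common extension is to give each pickable object its own unique flat color, so the returned RGB value identifies exactly which object was hit.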
Another possible option is to compute a ray in the 3D world based on the 2D click position and then determine all objects in 3D space that collide with this ray. This is called ray picking.
You can find an OpenGL tutorial for ray picking here.
The example uses glRenderMode, glLoadName, etc., which may not be the best choice if you are not using the fixed-function pipeline (e.g. you are using custom shaders, etc.).
Another option would be to do the math and compute the ray vector yourself based on the click position, viewport and projection matrices. If you want to do this, the documentation of gluUnProject can help you.
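As a rough illustration of that do-the-math route on Android, the sketch below uses android.opengl.GLU.gluUnProject to unproject the touch point at the near and far planes and build a ray from them. It assumes you keep your own model-view and projection matrices (GLES 2.0 style), and the class and method names are illustrative. Note that Android's gluUnProject writes a 4-component homogeneous result that still needs the perspective divide.

    import android.opengl.GLU;

    public final class RayPicking {

        /**
         * Returns {origin, direction} of the pick ray in world space.
         * viewport is {x, y, width, height}; the touch y is flipped to GL's bottom-left origin.
         */
        public static float[][] computeRay(float touchX, float touchY,
                                           float[] modelView, float[] projection, int[] viewport) {
            float glY = viewport[3] - touchY;
            float[] near = new float[4];
            float[] far = new float[4];
            GLU.gluUnProject(touchX, glY, 0f, modelView, 0, projection, 0, viewport, 0, near, 0);
            GLU.gluUnProject(touchX, glY, 1f, modelView, 0, projection, 0, viewport, 0, far, 0);
            // Divide by w, then the direction is simply far minus near.
            float[] origin = { near[0] / near[3], near[1] / near[3], near[2] / near[3] };
            float[] end = { far[0] / far[3], far[1] / far[3], far[2] / far[3] };
            float[] direction = { end[0] - origin[0], end[1] - origin[1], end[2] - origin[2] };
            return new float[][] { origin, direction };
        }
    }

Intersecting that ray with each object's bounding volume then gives you both the picked object and the exact hit position on it.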

OpenGL heads up display

For an OpenGL (ES) game, the 'camera' moves about the x, y, z axes in response to user input via gluLookAt. That works so far. However, I need to implement a heads-up display that stays in a static position in the corner of the screen. Knowing the current location of the camera, I draw the HUD. When the camera moves, the HUD jostles for a frame or two before returning to its proper location. Is there some good way of drawing a HUD in OpenGL that is not affected by the camera?
Why not just reset the model-view matrix to use a fixed camera position before you draw the HUD? Given that the camera and view for the HUD are fixed, you only need to calculate them once. I've done this on GLES 2.0, but it should work in earlier versions as well.
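A minimal sketch of that two-pass idea in GLES 2.0, assuming you pass your own matrices to the shaders; the renderer interfaces and names are only placeholders for your own classes.

    import android.opengl.GLES20;
    import android.opengl.Matrix;

    public final class HudPass {

        private final float[] hudProjection = new float[16];
        private final float[] hudView = new float[16];

        /** The HUD matrices are fixed, so compute them once per surface size, not per frame. */
        public void onSurfaceChanged(int width, int height) {
            Matrix.orthoM(hudProjection, 0, 0f, width, 0f, height, -1f, 1f);
            Matrix.setIdentityM(hudView, 0);
        }

        public void onDrawFrame(SceneRenderer scene, HudRenderer hud) {
            scene.draw();                            // 3D pass with the moving gluLookAt-style camera
            GLES20.glDisable(GLES20.GL_DEPTH_TEST);  // HUD always draws on top
            hud.draw(hudProjection, hudView);        // fixed matrices, so the HUD never jostles
            GLES20.glEnable(GLES20.GL_DEPTH_TEST);
        }

        // Placeholder interfaces standing in for your own renderer classes.
        public interface SceneRenderer { void draw(); }
        public interface HudRenderer { void draw(float[] projection, float[] view); }
    }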
Alternatively, when you start drawing a frame, save the current camera position in a variable and use that same saved data for both the scene and the HUD. This should remove the problem.
