Spatial offset of a virtual object with OpenGL ES perspective - Android

Context: I'm currently working on an Augmented Reality (AR) application using OpenGL ES 2.0 and AR glasses running on Android. My goal is to display a virtual cursor at the tip of a real object: a screwdriver. Both the glasses and the screwdriver are tracked by a fixed external camera. The left image just below can give you an idea of the setup.
Things that are working: So far, I'm able to display a virtual 3D object (for example a cube) at a given location in space. For example, I can position it at (more or less 1 cm from) the tip of the tracked screwdriver. When I just rotate my head, the virtual cube appears to stay in the same place in the real world, which is nice. This is the behavior I expected, and it is consistent with the cube's real-world anchor.
Issue: However, when I translate my head (and thus translate the OpenGL camera), the cube shows a strange spatial offset, as if it were shifted away from the object's tip (case 2 in the drawing above). This shift can be quite large (up to 5 or 6 cm) and is inconsistent with the real world. Yet if I align the object exactly with one of the camera axes, the cube appears correctly placed at the tip of the object, which confuses me.
Question: Is this just a strange visual perspective effect? How can it work for head rotations but not for head translations? Did I miss something about perspective projection in OpenGL ES?
Implementation details: The fixed external camera is the origin of world coordinates. It is very precise and gives me both the world-space position and rotation of every tracked object (including the glasses and the screwdriver). It continuously sends this data over Bluetooth to my Android program, so that what the user sees stays up to date.
In case 1, this works like a charm: the camera correctly detects that the screwdriver is at position (0, 0, 1 meter), with some arbitrary rotation, I display a cube centered on that position, and it appears correctly placed. But after a head translation (case 2), the screwdriver is still detected at the correct position (it didn't move, after all), yet the cube is shifted in a way that does not make sense to me.
If it were a small offset, I would put it down to an accumulation of small errors, but here it seems too big for that to be the only explanation. Depending on the head translation I make, the cube acquires a different offset and, overall, gives the impression of not having a single fixed position in the world.
I am using a perspective projection with the FOV and aspect ratio of the AR glasses. The position of the OpenGL camera is set to the position of the AR glasses, and the look-at values are computed from the direction the head is currently facing.
If I modify the FOV, I lose the expected behavior for head rotations and correct positioning. Finally, I am using the glasses as a stereo display.
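For reference, the camera setup described above boils down to something like the following sketch, using Android's Matrix helper class. The variable names (glassesPos, forwardDir, upDir, fovY, aspect) and the near/far planes are placeholders, not the actual code:

    import android.opengl.Matrix;

    public class ArCamera {
        private final float[] viewMatrix = new float[16];
        private final float[] projMatrix = new float[16];
        private final float[] viewProj   = new float[16];

        // glassesPos: world-space position of the AR glasses (the tracker is the world origin)
        // forwardDir: unit vector of the direction the head is facing
        // upDir:      up vector of the head, also taken from the tracker
        public void update(float[] glassesPos, float[] forwardDir, float[] upDir,
                           float fovY, float aspect) {
            // Look-at target = eye position + forward direction
            float cx = glassesPos[0] + forwardDir[0];
            float cy = glassesPos[1] + forwardDir[1];
            float cz = glassesPos[2] + forwardDir[2];

            Matrix.setLookAtM(viewMatrix, 0,
                    glassesPos[0], glassesPos[1], glassesPos[2],  // eye
                    cx, cy, cz,                                   // center
                    upDir[0], upDir[1], upDir[2]);                // up

            // Perspective projection with the glasses' nominal FOV and aspect ratio
            Matrix.perspectiveM(projMatrix, 0, fovY, aspect, 0.1f, 100f);

            Matrix.multiplyMM(viewProj, 0, projMatrix, 0, viewMatrix, 0);
        }

        public float[] viewProjection() { return viewProj; }
    }

Note that with a stereo display each eye normally gets its own view matrix, offset sideways by half the interpupillary distance (and often an asymmetric projection); if both eyes share a single centered camera like the one above, that mismatch is one possible source of a translation-dependent offset.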

Related

Simulating FOV changes in VR for Google-Cardboard with Unity

I am working on my first VR project, in which I am displaying satellite data inside a sphere. The camera/observer is placed in the middle of the sphere and "looks up" at the satellite data, which is rendered in all directions. I am doing this under Unity 2021 using the latest Cardboard SDK and running it on a Pixel 3 on Android 12. After some tinkering, I managed to get the scene to render, but the observer is MUCH too close to the scene. I am aware that the FOV is fixed by the device, but it seems to me that I should be able to scale the scene to "zoom out". However, nothing I have tried works, including the following:
1) Simply changing the size of the sphere (which is just a single "flip-normalled" object).
2) Changing the camera parameters (Note: I now understand that these have zero effect in VR, as the device sets the FOV).
3) Placing the camera object, embedded in an XRRig prefab in my case, inside an arbitrary "GameObject" and re-scaling that object (as specified here).
4) As in 3, but placing every object inside the GameObject.
None of these have any effect on the eventual scene as built on the device. I am at a loss. Surely what I am attempting is possible? I really just want a tiny observer, i.e. to make the "sky" seem much farther away. Any/all help appreciated.
Cheers.
Perhaps you should make the satellites, or the images, smaller when rendering them onto the sphere. Scaling the sphere itself will probably just make everything larger or smaller to match the new size of the sphere.

Android Sceneform Plane Detection and their angles

I am trying to develop an Android app which draws a perfectly straight line directly in front of me. My reference point is my phone, which means that the line has to be parallel to my phone's left side.
I have the following issues:
I am using Sceneform, so there is no "onSurfaceCreated" callback (?).
I assume that the white dots show the detected surface. I would then expect that, whenever a surface is detected, I can place a Shape on it. But sometimes that does not work, and sometimes I can place a Shape even when there are no visible white dots.
When I try to draw a line between the points (0,0,0) and (1,0,0), it is not always parallel to the left side of my phone. I assume the reason for this is related to the following fact:
the angle between the left-bottom corner and the left-top corner of the detected surface is not zero (if we take the phone's left side as the y-axis and its bottom as the x-axis), and this angle changes each time I reopen the app.
These are more theory questions than implementation questions, so I need someone to confirm or refute them, or to give me a guideline.
1) There isn't a method like onSurfaceCreated.
2) Not all detected planes are covered with white dots. This is intended, because rendering every detected plane with white dots would confuse the users.
3) When you talk about the points (0,0,0) and (1,0,0), is that a world position or a local position? Either way, you cannot draw a line parallel to the left side of your phone with the approach you describe.
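For point 3, a common Sceneform pattern for drawing a straight line between two world-space points is to stretch a thin cube between them and orient it explicitly. This is only a sketch under that assumption; the Material is assumed to come from MaterialFactory.makeOpaqueWithColor, and the node still has to be attached to the scene (for example with arSceneView.getScene().addChild(node)):

    import com.google.ar.sceneform.Node;
    import com.google.ar.sceneform.math.Quaternion;
    import com.google.ar.sceneform.math.Vector3;
    import com.google.ar.sceneform.rendering.Material;
    import com.google.ar.sceneform.rendering.ModelRenderable;
    import com.google.ar.sceneform.rendering.ShapeFactory;

    public final class Lines {

        // Builds a node whose renderable spans from 'from' to 'to' in world space.
        public static Node between(Vector3 from, Vector3 to, Material lineMaterial) {
            Vector3 difference = Vector3.subtract(to, from);

            // A thin cube whose Z extent equals the distance between the two points.
            ModelRenderable line = ShapeFactory.makeCube(
                    new Vector3(0.01f, 0.01f, difference.length()),
                    Vector3.zero(), lineMaterial);

            Node node = new Node();
            node.setRenderable(line);
            // World position: the midpoint between the two endpoints.
            node.setWorldPosition(Vector3.add(from, to).scaled(0.5f));
            // World rotation: point the cube's Z axis along the from->to direction.
            node.setWorldRotation(
                    Quaternion.lookRotation(difference.normalized(), Vector3.up()));
            return node;
        }
    }

Keep in mind that world positions like (0,0,0) and (1,0,0) are expressed in the frame ARCore chooses when the session starts, not in the phone's frame, which is why such a line is not reliably parallel to the phone's left side.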

How to use the numbers from Game Rotation Vector in Android?

I am working on an AR app that needs to move an image depending on device's position and orientation.
It seems that Game Rotation Vector should provide the necessary data to achieve this.
However, I can't seem to understand what the values I get from the GRV sensor represent. For instance, to reach the same value on the Z axis I have to rotate the device 720 degrees, which seems odd.
If I could somehow convert these numbers into angles around the x, y and z axes of the device's reference frame, my problem would be solved.
I have googled this issue for days and haven't found any sensible information on the meaning of the GRV values or how to use them.
TL;DR: What do the numbers of the GRV sensor represent, and how do I convert them to angles?
As the docs state, the GRV sensor gives back a 3D rotation vector. This is represented as three component numbers which make this up, given by:
x axis (x * sin(θ/2))
y axis (y * sin(θ/2))
z axis (z * sin(θ/2))
This can be confusing at first. The vector (x, y, z) is the unit axis the device has rotated around, and θ (pronounced theta) is the single rotation angle about that axis; each component is simply that axis coordinate scaled by sin(θ/2). That scaling is also why a raw component only repeats after a 720 degree rotation: sin(θ/2) has a period of 720°.
Note also that when working with angles, especially in 3D, we generally use radians, not degrees, so theta is in radians. This looks like a good introductory explanation.
The reason it's given to us in this format is that it can easily be used for matrix rotations, and in particular as a quaternion. In fact, these are the first three (vector) components of a unit quaternion; the fourth component is the scalar part, cos(θ/2), and SensorManager.getQuaternionFromVector can recover the full quaternion for you. Together, the four components describe the rotation completely.
These are directly usable in OpenGL which is the Android (and the rest of the world's) 3D library of choice. Check this tutorial out for some OpenGL rotations info, this one for some general quaternion theory as applied to 3D programming in general, and this example by Google for Android which shows exactly how to use this information directly.
If you read the articles, you can see why you get it in this form and why it's called Game Rotation Vector - it's what's been used by 3D programmers for games for decades at this point.
TL;DR: this example is excellent.
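As for converting the values to angles, here is a minimal sketch using only the SensorManager helpers (sensor registration is assumed to happen elsewhere): expand the rotation vector into a rotation matrix, then into azimuth/pitch/roll in radians.

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public class GrvListener implements SensorEventListener {
        private final float[] rotationMatrix    = new float[16]; // 4x4, sized for OpenGL use
        private final float[] orientationAngles = new float[3];

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != Sensor.TYPE_GAME_ROTATION_VECTOR) return;

            // Expands (x*sin(θ/2), y*sin(θ/2), z*sin(θ/2)) into a rotation matrix.
            SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);

            // Converts the matrix into azimuth, pitch and roll, in radians.
            SensorManager.getOrientation(rotationMatrix, orientationAngles);
            float azimuth = orientationAngles[0];
            float pitch   = orientationAngles[1];
            float roll    = orientationAngles[2];
            // ...use the angles, or pass rotationMatrix straight to OpenGL...
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { /* not needed here */ }
    }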
Edit - How to use this to show a 2D image which is rotated by this vector in 3D space.
In the example above, SensorManager.getRotationMatrixFromVector converts the Game Rotation Vector into a rotation matrix which can be applied to rotate anything in 3D. To apply this rotation to a 2D image, you have to think of the image in 3D: it is really a segment of a plane, like a sheet of paper. So you map your image, which in the jargon is called a texture, onto this plane segment.
Here is a tutorial on texturing cubes in OpenGL for Android, with example code and an in-depth discussion. From a cube it's a short step to a plane segment - it's just one face of a cube! In fact, that's a good resource for getting to grips with OpenGL on Android; I'd recommend reading the previous and subsequent tutorial steps too.
You also mentioned translation. Look at the onDrawFrame method in the Google code example: there is a translation using gl.glTranslatef and then a rotation using gl.glMultMatrixf. This is how you translate and rotate.
The order in which these operations are applied matters. Here's a fun way to experiment with that: check out Livecodelab, a live 3D sketch-coding environment which runs in your browser. In particular, this tutorial encourages reflection on the ordering of operations. Obviously the move command is a translation.
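For illustration, here is a hedged sketch of how that translate-then-rotate pattern typically looks in an OpenGL ES 1.x renderer. mRotationMatrix is assumed to be filled by SensorManager.getRotationMatrixFromVector in the sensor callback, and the quad-drawing step is a placeholder rather than code from the Google sample:

    import javax.microedition.khronos.egl.EGLConfig;
    import javax.microedition.khronos.opengles.GL10;
    import android.opengl.GLSurfaceView;
    import android.opengl.Matrix;

    public class ImageRenderer implements GLSurfaceView.Renderer {
        private final float[] mRotationMatrix = new float[16];

        public ImageRenderer() {
            Matrix.setIdentityM(mRotationMatrix, 0); // identity until the sensor delivers data
        }

        @Override
        public void onDrawFrame(GL10 gl) {
            gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            gl.glMatrixMode(GL10.GL_MODELVIEW);
            gl.glLoadIdentity();

            gl.glTranslatef(0f, 0f, -3f);          // first move the plane away from the eye
            gl.glMultMatrixf(mRotationMatrix, 0);  // then apply the sensor rotation
            // drawTexturedQuad(gl);               // placeholder: draw the image's plane segment
        }

        @Override
        public void onSurfaceChanged(GL10 gl, int width, int height) {
            gl.glViewport(0, 0, width, height);
        }

        @Override
        public void onSurfaceCreated(GL10 gl, EGLConfig config) { }
    }

Swapping the glTranslatef and glMultMatrixf calls changes the result: translating first and rotating second (in code order) spins the image in place, while rotating first and translating second swings it around the viewer.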

Dynamic Environment mapping from camera in Augmented Reality setting

I am trying to implement something like the technique described in this (old) paper to use the phone camera's video frames to create an illusion of environment mapping in an AR app.
I want to take the camera frame, divide it into sub-areas and then use those as faces on the cube map. The division of the camera frame would look something like this:
Now the X area is easy: I can use glCopyTexImage2D to copy that square area into my cubemap texture. But I need help with the trapezoid-shaped areas around X (forget about the triangles for now).
How can I take those trapezoidal areas and distort them into square textures? I think I need the inverse of the perspective projection that happens later, so that the two cancel each other out in the final render when I render the cubemap as a skybox around my camera (does that explain what I want?).
Before doing this I tried a simpler step: putting the square X area on every side of the cubemap, just to see if glCopyTexImage2D can even be used for this. It can, but the results are not rotated correctly; some faces are "upside down" when I render the cubemap as a skybox. The question is similar: how can I rotate them before using them as textures?
I also thought about solving the problem from the other side and modifying the texture coordinates to make the necessary adjustments, but that does not seem easy either, since the lookup in the fragment shader with "textureCube" is more complicated than a normal texture lookup.
Any ideas?
I'm trying to do this in my AR app on Android with OpenGL ES 2.0 but I guess more general OpenGL advice might also be useful.
Update
I have come to the conclusion that this is not worth pursuing anymore. The paper makes it look nice, but my experiments with a phone camera have shown a major contradiction. If you want to reflect the environment in an object rendered in AR, the camera view is very limited. When the camera is far away from the tracked object you have enough environment information for a good reflection, but you will barely see it because the camera is far away. But when you bring the camera closer to see the awesome reflection in detail, the tracked object will fill most of the camera's field of view and you barely have any environment to reflect anymore. So in either case you lose and the result is not worth the effort.
It seems that you need to create a mesh with the UV mapping described in the article, render it with the camera texture into another texture, and then use that as the cubemap.
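One way to set that up in OpenGL ES 2.0 is to attach a cubemap face to a framebuffer object and draw a full-face quad textured with the camera frame, using whatever UVs pick out (and thereby un-distort) the corresponding region. A hedged sketch, with the face size (256) and the face-drawing callback as placeholders:

    import android.opengl.GLES20;

    public final class CubemapTarget {
        private static final int SIZE = 256;
        private int cubemap;
        private int fbo;

        public void create() {
            int[] id = new int[1];
            GLES20.glGenTextures(1, id, 0);
            cubemap = id[0];
            GLES20.glBindTexture(GLES20.GL_TEXTURE_CUBE_MAP, cubemap);
            for (int face = 0; face < 6; face++) {
                // Allocate storage for every face of the cubemap.
                GLES20.glTexImage2D(GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0,
                        GLES20.GL_RGBA, SIZE, SIZE, 0,
                        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
            }
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_CUBE_MAP,
                    GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_CUBE_MAP,
                    GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

            GLES20.glGenFramebuffers(1, id, 0);
            fbo = id[0];
        }

        // Renders into a single cubemap face; 'face' is 0..5, counted from +X.
        public void renderIntoFace(int face, Runnable drawFace) {
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo);
            GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                    GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, cubemap, 0);
            GLES20.glViewport(0, 0, SIZE, SIZE);
            // The callback draws a screen-filling quad textured with the camera frame,
            // with texture coordinates chosen to map the trapezoid onto the square face.
            drawFace.run();
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        }
    }

Because you control the quad's texture coordinates, the same mechanism also solves the "upside down" faces: flipping or rotating the UVs per face replaces the fixed copy that glCopyTexImage2D performs.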

Android OpenGL ES 1.1 flickering

While working on my Android project, I discovered something strange.
I am displaying an ocean map as 2D graphics with Android OpenGL ES.
The Z axis is used only to determine the drawing order of the objects, so the Z values differ by only about 0.0001.
In the meantime, I have also tried values more than 1000 times larger.
Then, depending on the zoom in / zoom out level, some objects flicker.
Why does this problem occur?
Is it a device-specific problem that cannot be resolved?
Or is it a problem of Android OpenGL ES itself?
More:
The photo below is a screenshot of the actual device's screen.
This phenomenon occurs each time I zoom in / zoom out.
I assume what you are experiencing is z-fighting: http://en.wikipedia.org/wiki/Z-fighting
This happens because your objects are so close together in depth that, for certain pixels, the z-buffer cannot distinguish which surface is in front of the other.
You have three choices now:
1) Adjust your projection, specifically the znear and zfar values. Read more here: http://www.opengl.org/archives/resources/faq/technical/depthbuffer.htm
2) Increase the distance between the objects.
3) Since you are drawing a 2D scene, you might use an orthographic projection. In that case it might be worth not using depth buffering at all and just drawing the objects from back to front (Painter's Algorithm, http://en.wikipedia.org/wiki/Painters_algorithm).
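A minimal sketch of option 3, assuming a simple Layer abstraction (a placeholder, not part of any library) whose instances are already sorted from back to front:

    import java.util.List;
    import javax.microedition.khronos.opengles.GL10;

    public final class FlatMapRenderer {
        public interface Layer { void draw(GL10 gl); }

        private final List<Layer> layersBackToFront;

        public FlatMapRenderer(List<Layer> layersBackToFront) {
            this.layersBackToFront = layersBackToFront;
        }

        public void drawScene(GL10 gl, float width, float height) {
            gl.glMatrixMode(GL10.GL_PROJECTION);
            gl.glLoadIdentity();
            gl.glOrthof(0f, width, 0f, height, -1f, 1f);  // orthographic projection for a 2D map

            gl.glMatrixMode(GL10.GL_MODELVIEW);
            gl.glLoadIdentity();

            gl.glDisable(GL10.GL_DEPTH_TEST);  // no depth buffer involved, so no z-fighting
            gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

            for (Layer layer : layersBackToFront) {  // Painter's Algorithm: back to front
                layer.draw(gl);
            }
        }
    }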
