I am working on my first VR project, in which I am displaying satellite data inside a sphere. The camera/observer is placed at the middle of the sphere and "looks up" at the satellite data, which is rendered in all directions. I am doing this under Unity 2021 using the latest Cardboard SDK, running on a Pixel 3 with Android 12. After some tinkering, I managed to get the scene to render, but the observer is MUCH too close to the scene. I am aware that the FOV is fixed by the device, but it seems to me that I should be able to scale the scene to "zoom out". However, nothing I have tried works, including the following:
Simply changing the size of the sphere (which is just a single "flip-normalled" object)
Changing the camera parameters (Note: I now understand that these have zero effect in VR, as the device sets the FOV).
Placing the camera object, embedded in an XRRig prefab in my case, inside an arbitrary "GameObject" and re-scaling the object (as specified here)
As in 3, but placing every object inside the GameObject
None of these have any effect on the eventual scene as built on the device. I am at a loss. Surely what I am attempting is possible? I really just want a tiny observer, i.e. to make the "sky" seem much farther away. Any/all help appreciated.
Cheers.
Perhaps you should make the satellites, or the images, smaller when rendering them onto the sphere. Scaling the sphere itself will probably just make everything larger or smaller to match the size of the sphere.
I am using a square sprite in Unity for an Android game. I want this rectangle to always be placed at the bottom of the screen. The strip acts as the ground, and other objects can fall onto it. I want the lower side of this rectangle to stick to the lower edge of the screen. How do I do that? If I try to place it there manually, the placement gets disturbed whenever the screen resolution changes. Also, the camera is not moving, so I only need to fix the position of this strip with respect to the camera once (I think). What should I do?
We need a little more information. Sprites at the lower edge of the screen are usually intended to be part of the UI. Is that the case here? If it is, the way to do it is very different from a normal sprite in the world.
(Since I am not allowed to comment, I will try to answer for both cases; however, I am not at a computer capable of running Unity, so I can't really provide a concrete answer, i.e. code.)
UI:
Add an Image to your Canvas. Go to the anchor presets (inside the Rect Transform) and set the anchor to the appropriate position. Then, if you don't want the image to be stretched, go to the Image component and enable the "Preserve Aspect" option (it may be named slightly differently depending on your Unity version). Add the sprite to the Image and you are all set.
World:
Here, the situation is more complicated. You first need to get the screen dimensions, then calculate the size of the object according to the aspect ratio, then do a Camera.ScreenToWorldPoint conversion, and finally use the LookAt method in Update() so the sprite faces the camera at all times. Or just use the UI approach described above; let's be honest, that is probably what you need.
Context: I'm currently working on an Augmented Reality (AR) application using OpenGL ES 2.0 and AR glasses running on Android. My goal is to display a virtual cursor at the tip of a real object: a screwdriver. Both the glasses and the screwdriver are tracked by a fixed external camera. The left image just below can give you an idea of the setup.
Things that are working: So far, I'm able to display a virtual 3D object (for example a cube) at a given location in space. For example, I am able to position it at (more or less 1 cm from) the tip of the tracked screwdriver. When I just rotate my head, the virtual cube appears to "stay in the same place" in the real world, which is nice. This behavior is what I expected, and is consistent with its real-world anchor.
Issue: However, when I translate my head (and thus translate the OpenGL camera), the cube shows a strange spatial offset, as if it were shifted away from the object's tip (case 2 in the drawing above). This shift can be quite significant (up to 5 or 6 cm) and inconsistent with the real world. Yet if I align the object exactly with any of the camera axes, the cube appears well placed at the tip of the object, which confuses me.
Question: Is it just a strange visual perspective effect? How can it work with head rotations but not head translations? Did I miss something about perspective projection in OpenGL ES?
Implementation details: The fixed external camera is the origin of world coordinates. It is very precise, and gives me both the world-space position and rotation of each tracked object (including the glasses and the screwdriver). To be more precise, it continuously sends this data via Bluetooth to my Android program so that what the user sees is up to date.
In case 1, this works like a charm: the camera correctly detects that the screwdriver is at position (0, 0, 1 meter), with some arbitrary rotation, I display a cube centered at that position, and it appears correctly placed. But after a head translation (case 2), the screwdriver is still detected at the correct position (it didn't move, after all), yet the cube is shifted in a way that does not make sense to me.
If it were a small offset, I would put it down to an accumulation of small errors, but here it seems too big for that to be the only explanation. Depending on the head translation I perform, the cube gains a different offset and overall gives the impression of not having a single fixed position in the world.
I am using a perspective projection with the FOV and aspect ratio of the AR glasses. The position of the OpenGL camera is set to the position of the AR glasses, and the look-at values are computed from the direction the head is currently facing.
If I modify the FOV, I lose the expected behavior for head rotations and correct positioning. Finally, I am using the glasses as a stereo display.
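For reference, here is a minimal sketch of the view/projection setup described above, using android.opengl.Matrix. The tracker-supplied values (glassesPos, headForward, headUp, and the glasses' FOV and aspect ratio) are placeholders for whatever your own pipeline provides, not part of the original code:

```java
import android.opengl.Matrix;

public final class ArCamera {
    /**
     * Builds a combined view-projection matrix for one eye.
     * glassesPos, headForward and headUp are assumed to come from the external tracker,
     * expressed in the same world coordinate frame as the screwdriver.
     */
    public static float[] viewProjection(float[] glassesPos, float[] headForward, float[] headUp,
                                         float fovYDegrees, float aspectRatio) {
        float[] proj = new float[16];
        float[] view = new float[16];
        float[] viewProj = new float[16];

        // Perspective projection from the glasses' optical specs.
        Matrix.perspectiveM(proj, 0, fovYDegrees, aspectRatio, 0.1f, 100f);

        // Camera at the tracked glasses position, looking along the tracked head direction.
        Matrix.setLookAtM(view, 0,
                glassesPos[0], glassesPos[1], glassesPos[2],
                glassesPos[0] + headForward[0],
                glassesPos[1] + headForward[1],
                glassesPos[2] + headForward[2],
                headUp[0], headUp[1], headUp[2]);

        // viewProj = proj * view, to be combined with each object's model matrix.
        Matrix.multiplyMM(viewProj, 0, proj, 0, view, 0);
        return viewProj;
    }
}
```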
I am trying to implement something like the technique described in this (old) paper to use the phone camera's video frames to create an illusion of environment mapping in an AR app.
I want to take the camera frame, divide it into sub-areas and then use those as faces on the cube map. The division of the camera frame would look something like this:
Now the X area is easy: I can use glCopyTexImage2D to copy that square area into my cubemap texture. But I need help with the trapezoid-shaped areas around X (forget about the triangles for now).
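For the easy part, a minimal sketch of what that copy might look like with GLES 2.0 on Android; the cubemap texture id, the target face, and the pixel rectangle are assumptions standing in for your own values, and the camera frame is assumed to already be in the currently bound read framebuffer:

```java
import android.opengl.GLES20;

public final class CubeMapCopy {
    /**
     * Copies a square region (x, y, size, size) of the currently bound read framebuffer
     * into one face of a cubemap texture. 'face' is e.g. GL_TEXTURE_CUBE_MAP_POSITIVE_Z.
     */
    public static void copyRegionToFace(int cubeMapTextureId, int face, int x, int y, int size) {
        GLES20.glBindTexture(GLES20.GL_TEXTURE_CUBE_MAP, cubeMapTextureId);
        GLES20.glCopyTexImage2D(face, 0, GLES20.GL_RGB, x, y, size, size, 0);
    }
}
```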
How can I take those trapezoidal areas and distort them into square textures? I think I need the inverse of the perspective projection applied later, so that the two cancel each other out in the final render when I render the cubemap as a skybox around my camera (does that explain what I want?).
Before doing this, I tried the simpler step of putting the square X area on every side of the cubemap, just to see whether glCopyTexImage2D can even be used for this. It can, but the results are not oriented correctly; some faces appear "upside down" when I render the cubemap as a skybox. The question is similar: how can I rotate them before using them as textures?
I also thought about solving the problem from the other side and modifying the texture coordinates to make the necessary adjustments, but that does not seem easy either, since the lookup in the fragment shader with textureCube is more complicated than a normal texture lookup.
Any ideas?
I'm trying to do this in my AR app on Android with OpenGL ES 2.0 but I guess more general OpenGL advice might also be useful.
Update
I have come to the conclusion that this is not worth pursuing anymore. The paper makes it look nice, but my experiments with a phone camera have shown a major contradiction. If you want to reflect the environment in an object rendered in AR, the camera view is very limited. When the camera is far away from the tracked object you have enough environment information for a good reflection, but you will barely see it because the camera is far away. But when you bring the camera closer to see the awesome reflection in detail, the tracked object will fill most of the camera's field of view and you barely have any environment to reflect anymore. So in either case you lose and the result is not worth the effort.
It seems you need to create a mesh with the UV mapping described in the article and render it, textured with the camera image, into another texture. Then use that as the cubemap.
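Something along these lines might work for the render-to-texture step, assuming GLES 2.0 on Android. The cubemap faces are assumed to be already allocated with glTexImage2D, and drawWarpedMesh is a placeholder for your own draw call that binds the camera texture and the UV-mapped mesh from the paper:

```java
import android.opengl.GLES20;

public final class CameraToCubeMap {
    /**
     * Attaches one cubemap face to a framebuffer object, then runs the caller's draw
     * code (a mesh whose UVs sample the warped region of the camera texture) into it.
     */
    public static void renderFaceFromCamera(int cubeMapTextureId, int face, int faceSize,
                                            Runnable drawWarpedMesh) {
        int[] fbo = new int[1];
        GLES20.glGenFramebuffers(1, fbo, 0);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);

        // Attach the chosen cubemap face as the FBO's color attachment.
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                face, cubeMapTextureId, 0);

        if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER)
                != GLES20.GL_FRAMEBUFFER_COMPLETE) {
            throw new IllegalStateException("FBO incomplete");
        }

        GLES20.glViewport(0, 0, faceSize, faceSize);
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
        drawWarpedMesh.run();  // draws the UV-mapped mesh sampling the camera texture

        // Restore the default framebuffer and clean up.
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        GLES20.glDeleteFramebuffers(1, fbo, 0);
    }
}
```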
I'm writing an Android app using OpenCV for my master's that will be something like a game. The main goal is to detect a car in a selected area. The "prize" will be triggered randomly while detecting cars. When the user hits the proper car, I want to display a 3D object overlaid on the screen, attach it to the middle of the car, and keep it there, so that when the user changes the angle from which they view the car, the object is also seen from a different angle.
At the moment I have EVERYTHING besides attaching the object. I've created the detection, I'm drawing the 3D overlay, and I've created functions that allow me to rotate the camera, etc. BUT I do not have any clue how I can attach the overlay to a specific point. Because I don't have this, I have no reference point from which to recalculate the renderer and change the overlay's perspective.
Please, I really need some help; even a small idea will be fine:
How can I attach the overlay to a specific point in the real world?
(Sorry, I couldn't comment. Need at least 50 points to do that ... :P )
I assume your image of the car is coming from a camera feed and you are drawing the 3D object in OpenGL. If so, then you can try this:
Set the pixel format of the OpenGL layer to RGBA_8888, so that you can make the background of the OpenGL surface a transparent color.
Use a RelativeLayout as the layout of your activity.
First, add the OpenCV camera view to it at full height and width.
Then add the OpenGL layer on top, also at full height and width.
Get the position of the real car from the OpenCV layer as pixel values (or however you detect it).
Then convert those pixel coordinates to your OpenGL coordinate system so that you can draw the overlay in the right spot.
It worked for me; hope it works for you too.
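For illustration, a rough sketch of that layering in an Activity, assuming OpenCV's JavaCameraView and a renderer of your own (MyGLRenderer here is a placeholder for your GLSurfaceView.Renderer; it should clear with glClearColor(0, 0, 0, 0) so the camera preview shows through, and the usual OpenCV loader/enableView() start-up is omitted):

```java
import android.app.Activity;
import android.graphics.PixelFormat;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import android.widget.RelativeLayout;
import org.opencv.android.JavaCameraView;

public class ArOverlayActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        RelativeLayout root = new RelativeLayout(this);

        // 1. OpenCV camera preview at full height and width.
        JavaCameraView cameraView = new JavaCameraView(this, -1);  // -1 = any camera
        root.addView(cameraView, new RelativeLayout.LayoutParams(
                RelativeLayout.LayoutParams.MATCH_PARENT,
                RelativeLayout.LayoutParams.MATCH_PARENT));

        // 2. Transparent OpenGL layer on top, also at full height and width.
        GLSurfaceView glView = new GLSurfaceView(this);
        glView.setEGLContextClientVersion(2);
        glView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);          // RGBA_8888 with alpha
        glView.getHolder().setFormat(PixelFormat.TRANSLUCENT);  // transparent surface
        glView.setZOrderMediaOverlay(true);                     // keep it above the preview
        glView.setRenderer(new MyGLRenderer());                 // placeholder renderer
        root.addView(glView, new RelativeLayout.LayoutParams(
                RelativeLayout.LayoutParams.MATCH_PARENT,
                RelativeLayout.LayoutParams.MATCH_PARENT));

        setContentView(root);
    }
}
```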
I'm writing an Android and iOS engine in C++ and currently focusing on Android with the NDK.
I'd like to render to a viewport of a smaller size (say 600x360) and automatically upscale it to the native resolution (say 800x480). Currently the smaller viewport displays in a lower corner of my screen with black regions.
My problem is that I don't know of a simple way to do this transparently using the NDK. There is a GLSurfaceView.setScaleX (and Y) function in API level 11, which would be perfect, but it doesn't exist in API level 9, which I am targeting. Another (bad) solution is to render to an FBO and blit that to the screen as a final step.
I am considering simply storing a scaling matrix and asking the user of the engine (for now just me) to always multiply vertices by it when drawing to the screen. This would be similar to using glPushMatrix.
I searched for a while and couldn't find a good solution. Does anyone know how to help?
What you can do is get the SurfaceHolder from the GLSurfaceView with GLSurfaceView.getHolder() and then set the resolution you desire by calling SurfaceHolder.setFixedSize(width, height).
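A minimal sketch, assuming this runs in your Activity's onCreate() and yourRenderer is the GLSurfaceView.Renderer you already have:

```java
// Render internally at 600x360; the compositor scales the surface up to the
// view's on-screen size. The numbers are just the example values from the question.
GLSurfaceView glView = new GLSurfaceView(this);
glView.setRenderer(yourRenderer);            // your existing renderer
glView.getHolder().setFixedSize(600, 360);   // internal surface resolution
setContentView(glView);                      // the view itself still fills the screen
```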
In my case the GLSurfaceView has a FrameLayout root which fills the screen. I am not sure if that's required - I have it because I add other elements on top - but if you set the size and it doesn't fill the screen, then you know what's missing!
Using a framebuffer object is also a valid way, and you could draw some cool effects with it as well; the approach above is just faster when the only thing you want to do is scale the rendering down (or possibly up? I haven't tried).