I am making an Android app in Unity, using my own VR engine. It is small, and I have got looking around working perfectly. The only problem is that I cannot get my eyes to focus on the objects in front of them: I get double vision, where my left eye sees objects too far to the left and my right eye sees them too far to the right. I have tried pointing the eye cameras slightly inwards, and also moving them based on a raycast to find out where they are looking.
I am guessing it could be something to do with pointing the eyes outwards. My rig is a Nintendo Labo headset with an Android phone inside (making do with what I've got ;) ); unfortunately the phone and lenses don't quite line up, but this doesn't seem to affect one of my other projects. Or perhaps I need to distort my camera output in a special way.
Honestly, I have no idea! Some help from an expert, or anyone who is slightly clued up on the subject, would be greatly appreciated :D
It turns out I literally just need to point the cameras outwards rather than inwards.
I am working in the Google Cardboard environment, and I was wondering how to change the position of one eye in the VR camera system.
My goal is to get both eyes perfectly aligned so they produce the exact same image. Currently Unity automatically offsets one eye to give the scene a stereoscopic view. I tried creating two separate cameras, one for the left eye and one for the right eye, and then adjusting the position of one until I got both eyes to see the same image. However, when I tested this on a different phone, the images were offset again.
This problem has been throwing me for a loop, and I would greatly appreciate any advice or help!
Thank you!!
I recommend you use version 0.6 of the GVR SDK; it has a lot more options. The best Unity version for this SDK is 5.6.2.
Here's a useful video:
https://www.youtube.com/watch?v=JKC1YkpE-8M
I'm trying to do a simple AR scene with an NFT image that I've created with genTextData. The result works fairly well in the Unity editor, but once compiled and run on an Android device, the camera resolution is very bad and there's no focus at all.
My marker is rather small (a 3 cm picture), and the camera is so blurred that the AR cannot identify the marker from far away. I have to put the phone right in front of it (still very blurred), and it will show my object, but with a lot of flickering and jittering.
I tried playing with the filter fields (sample rate/cutoff...); it helped a little with the flickering of the object, but it would never display it from far away; I always have to put my phone right in front of the marker. The result I want is to detect the small marker (sharp resolution and/or good focus) from a fair distance away, roughly the distance from your computer screen to your eyes.
The problem could be camera resolution and focus, or it could be something else, but I'm pretty sure that the AR cannot identify the marker points because of the blurriness.
Any ideas or solutions for this problem?
You can have a look here:
http://augmentmy.world/augmented-reality-unity-games-artoolkit-video-resolution-autofocus
I compiled the Java part of the Unity plugin and set it to use the highest resolution your phone offers. The autofocus mode is also activated.
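If you want to adapt this yourself, the idea on the Java side is just to pick the largest supported preview size and switch the camera to continuous autofocus. Here is a minimal sketch using the legacy android.hardware.Camera API (the helper name configureCamera, and where you hook it into the plugin, are my own assumptions):

```java
import android.hardware.Camera;
import java.util.List;

public class CameraTuning {
    // Hypothetical helper: call this with the Camera instance the AR plugin opens.
    @SuppressWarnings("deprecation")
    public static void configureCamera(Camera camera) {
        Camera.Parameters params = camera.getParameters();

        // Pick the largest supported preview size (highest resolution).
        List<Camera.Size> sizes = params.getSupportedPreviewSizes();
        Camera.Size best = sizes.get(0);
        for (Camera.Size s : sizes) {
            if (s.width * s.height > best.width * best.height) {
                best = s;
            }
        }
        params.setPreviewSize(best.width, best.height);

        // Enable continuous autofocus if the device supports it.
        List<String> modes = params.getSupportedFocusModes();
        if (modes.contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO)) {
            params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
        } else if (modes.contains(Camera.Parameters.FOCUS_MODE_AUTO)) {
            params.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
        }

        camera.setParameters(params);
    }
}
```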
Tell me if that helps.
I have a little problem. I am trying to make an air hockey game for Android (something like Glow Hockey), and during my adventure with the first steps of getting to know AndEngine, I've found the following problem:
In air hockey, as you may know, the player's disc can be moved by touching it.
The problem is that I need to get the velocity and direction vector of the player's moves. Why? I want to make proper physics for when the player's disc collides with the puck (the puck should bounce off along the right vector with a velocity proportional to the player's). I mean, when the player moves slowly, the puck should pick up a low velocity after the collision, but the bounce vector won't change, and so on. Maybe you can suggest a better solution to my problem? Are there any AndEngine solutions for this?
Thank you a lot!
There are two ways of solving this. The first would be through the AndEngine Box2D extension: you could attach circular "bodies" to the puck and the disc, and play around with physics properties like mass and friction until you are happy with the result.
Forum: http://www.andengine.org/forums/physics-box2d-extension/
The other option is to do it manually, which I would only suggest if you want a challenge. Personally I would avoid this, as there is a lot of maths involved that Box2D will do for you :)
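Either way you still need to measure how fast the player is dragging the disc. A framework-agnostic sketch of that (my own example, not an AndEngine API; the class and method names are made up) is to keep the previous touch sample and divide the displacement by the elapsed time:

```java
/**
 * Minimal sketch (not AndEngine-specific): estimates the velocity of the
 * player's disc from successive touch positions.
 */
public class TouchVelocityTracker {
    private float lastX, lastY;
    private long lastTimeMs;
    private boolean hasSample = false;

    // Estimated velocity in pixels per second.
    public float velocityX, velocityY;

    /** Call this for every touch-move event on the player's disc. */
    public void onTouchMoved(float x, float y, long timeMs) {
        if (hasSample && timeMs > lastTimeMs) {
            float dt = (timeMs - lastTimeMs) / 1000f; // seconds since last sample
            velocityX = (x - lastX) / dt;
            velocityY = (y - lastY) / dt;
        }
        lastX = x;
        lastY = y;
        lastTimeMs = timeMs;
        hasSample = true;
    }

    /** Speed, i.e. the magnitude of the velocity vector. */
    public float speed() {
        return (float) Math.sqrt(velocityX * velocityX + velocityY * velocityY);
    }
}
```

With the Box2D extension you would then typically feed these values to the disc's body each frame (e.g. via setLinearVelocity), so the engine works out the impulse transferred to the puck for you.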
I'm researching 2D games for Android in order to implement an Android game project. My project is quite similar to Paper Toss. Instead of throwing a piece of paper, my game will throw a coin. Suppose I have a coin placed in three-dimensional space with coordinates A(x,y,z). I throw that coin forward, and after 1/100 of a second the coin moves from A(x,y,z) to A'(x',y',z'). This gives me two problems to solve:
Determine the formulas that can be used to compute the coordinates of the coin at time t.
I am still researching this one and have no idea how to solve it.
Map the three-dimensional points to two dimensions and use those new (two-dimensional) coordinates to draw the coin on screen. I have found two approaches to this problem: orthographic projection and perspective projection.
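For reference, a common textbook treatment of both problems looks like the sketch below. It assumes the coin is a point mass acted on only by gravity (no air resistance), that y is the vertical axis, and that the camera sits at the origin looking along z with focal length f; all of these are assumptions, not something fixed by the question.

```latex
% Trajectory of the coin launched from A = (x_0, y_0, z_0)
% with initial velocity (v_x, v_y, v_z), g \approx 9.81\,\mathrm{m/s^2}:
x(t) = x_0 + v_x t, \qquad
y(t) = y_0 + v_y t - \tfrac{1}{2} g t^2, \qquad
z(t) = z_0 + v_z t

% Simple pinhole perspective projection of a point (x, y, z)
% onto the screen plane at focal length f:
x_{\mathrm{screen}} = f \, \frac{x}{z}, \qquad
y_{\mathrm{screen}} = f \, \frac{y}{z}
```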
However, an old friend of mine said that OpenGL can help solve problems like these. Does anybody have experience with them?
Help me please :)
Thanks for reading my question.
My personal opinion is to use a 3D engine, since I don't like doing hacky stuff trying to fake 3D with 2D. However, 2D is much more beginner-friendly, so if you're not planning to do anything fancy like actually emulating how the coin rotates in the air, then I think it's doable in 2D.
My approach would be to use a framework like libgdx instead of dealing with OpenGL ES directly. It has built-in support for view projection (PerspectiveCamera), high-level 2D/3D rendering, and even a wrapper for OpenGL ES if you really want to go low-level. I'm not so sure right now about how you would approach the physics part.
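To give a rough idea of what the projection side looks like, here is a small sketch (my own example; the class name CoinProjection, the camera placement and the screen size are arbitrary) that sets up a libgdx PerspectiveCamera and projects a 3D coin position to 2D screen coordinates:

```java
import com.badlogic.gdx.graphics.PerspectiveCamera;
import com.badlogic.gdx.math.Vector3;

public class CoinProjection {
    public static void main(String[] args) {
        // Screen size is hard-coded here; in a real game you would use Gdx.graphics.
        int screenWidth = 480, screenHeight = 800;

        // A 67-degree field of view is a common default; tweak to taste.
        PerspectiveCamera camera = new PerspectiveCamera(67, screenWidth, screenHeight);
        camera.position.set(0f, 1.5f, 5f);  // camera a bit above and behind the origin
        camera.lookAt(0f, 0f, 0f);
        camera.near = 0.1f;
        camera.far = 100f;
        camera.update();

        // World-space position of the coin at some time t (from your physics step).
        Vector3 coin = new Vector3(0.2f, 1.0f, 2.0f);

        // project() converts world coordinates to window coordinates in place.
        Vector3 onScreen = camera.project(new Vector3(coin), 0, 0, screenWidth, screenHeight);
        System.out.println("Draw the coin sprite at: " + onScreen.x + ", " + onScreen.y);
    }
}
```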
Good luck....
I'm developing a small Android maze game, and I'm experiencing a strange effect which I can only describe via a screenshot: http://www.virtualalbum.eu/fu/39/cepp1523110123182951.jpg
At first I thought I needed to set up antialiasing, but the advice I followed to enable it changed nothing, and the effect appears a little too pronounced to be that anyway.
The labyrinth is composed of rectangular-based pieces for the walls and small square-based pillars between walls and on the edges, plus a big square as the floor.
There are 4 lights; I don't know if that matters.
I've been thinking about removing the small pillar faces adjacent to the walls, as you shouldn't see them anyway, but that would mean writing a lot of code and still wouldn't fix the zigzag on the floor.
Thanks a lot,
J
EDIT: After some more testing I'm starting to think it may be a z-fighting issue. Does anyone have any idea how to increase the depth buffer precision on Android?
I managed to fix it: setting gl.glDepthFunc(GL10.GL_LEQUAL) made the zigzag on the floor disappear (as it's the first thing I draw). I was still having issues with the walls, but for those I wrote some extra code (it wasn't that much after all), and I'm also saving some triangles.
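For anyone who lands here with the same z-fighting problem: besides the depth-function change above, you can also request a deeper depth buffer and keep the near plane as far out as possible. A rough sketch of the GLSurfaceView setup (the class names MazeSurfaceView/MazeRenderer are made up; the 24-bit request falls back to whatever the device actually supports):

```java
import android.content.Context;
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class MazeSurfaceView extends GLSurfaceView {
    public MazeSurfaceView(Context context) {
        super(context);
        // Ask for RGBA8888 with a 24-bit depth buffer (more precision than the
        // usual 16 bits); must be called before setRenderer().
        setEGLConfigChooser(8, 8, 8, 8, 24, 0);
        setRenderer(new MazeRenderer());
    }

    private static class MazeRenderer implements GLSurfaceView.Renderer {
        @Override
        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            gl.glEnable(GL10.GL_DEPTH_TEST);
            gl.glDepthFunc(GL10.GL_LEQUAL); // the fix mentioned above
        }

        @Override
        public void onSurfaceChanged(GL10 gl, int width, int height) {
            gl.glViewport(0, 0, width, height);
            // Depth precision is concentrated near the near plane, so pushing the
            // near plane out (e.g. 1.0 instead of 0.01) also reduces z-fighting.
        }

        @Override
        public void onDrawFrame(GL10 gl) {
            gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            // ... draw floor, walls and pillars here ...
        }
    }
}
```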