Weird graphical effect on Android

I'm developing a small Android maze game and I'm experiencing a strange effect which I can only describe via a screenshot: http://www.virtualalbum.eu/fu/39/cepp1523110123182951.jpg
At first I thought I needed to set up antialiasing, but the advice I followed to enable it changed nothing, and the effect looks a little too pronounced to be that anyway.
The labyrinth is composed of rectangular pieces for the walls and small square pillars between walls and at the edges, plus a big square as the floor.
There are four lights; I don't know if that matters.
I've been thinking about removing the small pillar faces adjacent to walls, as you shouldn't see them anyway, but that would mean writing a lot of code and it still wouldn't fix the zigzag along the floor.
Thanks a lot,
J
EDIT: After some more testing I'm starting to think it may be a z-fighting issue. Does anyone have any idea how to increase the depth buffer precision on Android?

I managed to fix it. Setting gl.glDepthFunc(GL10.GL_LEQUAL); made the zigzag on the floor disappear (as the floor is the first thing I draw). I was still having issues with the walls, but for those I wrote some extra code (it wasn't that much after all) and I'm also saving some triangles.
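For reference, the two depth-related knobs discussed here look roughly like this in a GLSurfaceView/GL10 setup. The class names and the renderer skeleton below are illustrative, not from the original project:

// Sketch of the two depth-related tweaks discussed above, assuming a
// GLSurfaceView-based setup with the OpenGL ES 1.x API (GL10).
// Class and variable names are illustrative, not from the original post.
import android.content.Context;
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class MazeSurfaceView extends GLSurfaceView {
    public MazeSurfaceView(Context context) {
        super(context);
        // Ask EGL for a 24-bit depth buffer instead of the common 16-bit default,
        // which reduces z-fighting between nearly coplanar surfaces.
        setEGLConfigChooser(8, 8, 8, 8, 24, 0);
        setRenderer(new MazeRenderer());
    }
}

class MazeRenderer implements GLSurfaceView.Renderer {
    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        gl.glEnable(GL10.GL_DEPTH_TEST);
        // GL_LEQUAL lets fragments at exactly the same depth as the floor win,
        // which is what removed the zigzag described above.
        gl.glDepthFunc(GL10.GL_LEQUAL);
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        gl.glViewport(0, 0, width, height);
        // Pushing the near plane out as far as the scene allows also helps,
        // since depth precision is concentrated near the near plane.
        // The projection (gl.glFrustumf(...)) would be set up here.
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        // ... draw floor first, then walls and pillars ...
    }
}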

Related

Unity VR How to focus eyes correctly

I am making an Android app in Unity, using my own VR engine. It is small, and I have got looking around working perfectly. The only problem is that I cannot get the eyes to focus on the objects in front of them: I get double vision, where my left eye sees objects too far to the left and my right eye too far to the right. I have tried pointing the eye cameras slightly inwards and moving them based on a raycast to find out where they are looking.
I am guessing it could be something to do with pointing the eyes outwards. My rig is a Nintendo Labo headset with an Android phone inside (making do with what I've got ;) ). Unfortunately the phone and lenses don't quite line up, but this doesn't seem to affect one of my other projects. Or perhaps I need to distort my camera in a special way.
Honestly, I have no idea! Some help from an expert or anyone who is slightly clued up in the subject would be greatly appreciated :D
It turns out I literally just needed to point the cameras outwards rather than inwards.
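For reference, the underlying stereo-camera math is just two view matrices offset by half the interpupillary distance and kept parallel (or angled very slightly outwards) rather than toed in. Below is a rough sketch in plain Android matrix code rather than Unity API code, since the poster mentions a home-grown engine; the class and method names and the 64 mm IPD are assumptions:

// Rough sketch of per-eye view matrices: two cameras offset by half the
// interpupillary distance (IPD) and kept parallel (no toe-in).
// Plain Android matrix math, not Unity code; names are illustrative.
import android.opengl.Matrix;

public class StereoCamera {
    private static final float IPD = 0.064f; // ~64 mm, an assumed typical value

    public final float[] leftView = new float[16];
    public final float[] rightView = new float[16];

    // eye = head position; forward/up define head orientation (assumed normalized)
    public void update(float[] eye, float[] forward, float[] up) {
        // right vector = forward x up
        float rx = forward[1] * up[2] - forward[2] * up[1];
        float ry = forward[2] * up[0] - forward[0] * up[2];
        float rz = forward[0] * up[1] - forward[1] * up[0];
        float half = IPD / 2f;

        // Left eye: shift left along the right vector, look straight ahead.
        Matrix.setLookAtM(leftView, 0,
                eye[0] - rx * half, eye[1] - ry * half, eye[2] - rz * half,
                eye[0] - rx * half + forward[0],
                eye[1] - ry * half + forward[1],
                eye[2] - rz * half + forward[2],
                up[0], up[1], up[2]);

        // Right eye: shift right along the right vector, same look direction.
        Matrix.setLookAtM(rightView, 0,
                eye[0] + rx * half, eye[1] + ry * half, eye[2] + rz * half,
                eye[0] + rx * half + forward[0],
                eye[1] + ry * half + forward[1],
                eye[2] + rz * half + forward[2],
                up[0], up[1], up[2]);
    }
}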

Android pixelation of HTML5 canvas

I have a really weird effect happening on Android using an HTML5 canvas. Here is the code in question; it uses a quadratic curve:
ctx.beginPath();
ctx.strokeStyle = wave.stroke;
ctx.moveTo(wave.sx, wave.sy);
ctx.quadraticCurveTo(
wave.x, wave.y,
wave.ex, wave.ey
);
ctx.stroke();
And it draws on top of itself multiple times:
http://codepen.io/EightArmsHQ/pen/9f899c4c64ab49113988055432b11a6b
Here it is on an iPhone:
But here it is on Android:
I'm saying Android, but I've heard it's super smooth on a Galaxy S6.
Just as a side note, I'm not very familiar with graphics (i.e. GPU etc.) in general, so I'm not quite sure what terms I should even be Googling. Please be gentle if you have any obvious solutions.
I believe what you're seeing are aliasing artefacts. The curve covers less than a pixel in places, and for some reason the rasterization seems to miss some of those pixels completely. This appears to happen specifically when drawing quadratic curves, while lines and even cubic Béziers work fine for me, so you could approximate the quadratics with those.
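If you do approximate the quadratic curves, the replacement is just evaluating the quadratic Bézier at a handful of t values and joining the points with lineTo() calls. Here is a sketch of the math, written in Java for concreteness; the formula is language-agnostic and ports directly into the canvas drawing code, and the names are made up:

// The workaround above: replace ctx.quadraticCurveTo(cx, cy, ex, ey) with a
// polyline of lineTo() calls along the same quadratic Bézier.
// Quadratic Bézier: B(t) = (1-t)^2 * P0 + 2(1-t)t * P1 + t^2 * P2
public final class QuadApprox {
    public static float[][] flatten(float sx, float sy,   // start P0
                                    float cx, float cy,   // control P1
                                    float ex, float ey,   // end P2
                                    int segments) {
        float[][] points = new float[segments + 1][2];
        for (int i = 0; i <= segments; i++) {
            float t = (float) i / segments;
            float u = 1f - t;
            points[i][0] = u * u * sx + 2f * u * t * cx + t * t * ex;
            points[i][1] = u * u * sy + 2f * u * t * cy + t * t * ey;
        }
        return points; // moveTo(points[0]), then lineTo() each following point
    }
}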

Direction to take with Android Graphics

I am looking at making a simple game. Without giving out the entire story, I need to draw two pieces of fruit (with arms and legs) that do different movements. They can do a few different actions (fewer than 5), and they also react to each other's actions.
I'd like it to look simple. Very 2D, kids' sort of graphics. Maybe shaded, but nice bright happy colours.
Let's say an action is to 'throw ball'. I'd like to see a semi smooth arm action. Smooth if possible.
So, I found a tutorial which used sprites and a PNG with 3 different states of a person walking. So, very basic. And I was able to make it walk across the screen by loading the part of the PNG for each state and iterating through them over and over again while moving the image.
I got pretty happy with my progress and would like to base my game on that sort of model - but... is using sprites and loading areas of the PNG to make the image move the correct approach? My PNG would be large if I want maybe 20 images just to throw the ball.
But if that's the right way to go, then great! It seems you can go with OpenGL and all that, but that's for 3D graphics, right? Would using sprites and a few PNGs with images be OK for performance and all that?
OpenGL is a valid choice for 2D or 3D; you shouldn't have any performance issues.
It will work fine for your game, and would likely be much smoother than trying to use android animations, which are not hardware accelerated on Android 2.x.
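For what it's worth, the sprite-sheet approach described in the question (one PNG holding all the frames, drawing a different sub-rectangle each tick) is the standard way to do this, and it works the same whether you draw with Canvas or OpenGL. A minimal sketch using Android's Canvas API; the frame layout, counts and names are assumptions:

// Minimal sprite-sheet sketch: one PNG laid out as a horizontal strip of
// equally sized frames, drawing one frame per tick.
// Frame size/count and names are illustrative, not from the original question.
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Rect;

public class Sprite {
    private final Bitmap sheet;      // e.g. 20 throw-ball frames side by side
    private final int frameWidth;
    private final int frameHeight;
    private final int frameCount;
    private int currentFrame = 0;

    public Sprite(Bitmap sheet, int frameCount) {
        this.sheet = sheet;
        this.frameCount = frameCount;
        this.frameWidth = sheet.getWidth() / frameCount;
        this.frameHeight = sheet.getHeight();
    }

    public void update() {
        currentFrame = (currentFrame + 1) % frameCount; // loop the animation
    }

    public void draw(Canvas canvas, int x, int y) {
        // src selects one frame out of the sheet, dst places it on screen.
        Rect src = new Rect(currentFrame * frameWidth, 0,
                (currentFrame + 1) * frameWidth, frameHeight);
        Rect dst = new Rect(x, y, x + frameWidth, y + frameHeight);
        canvas.drawBitmap(sheet, src, dst, null);
    }
}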

AndEngine VS Android's Canvas VS OpenGLES - For rendering a 2D indoor vector map

This is a big issue that I've been trying to figure out for a long time already.
I'm working on an application that should include a 2D vector indoor map in it.
The map will be drawn out from an .svg file that will specify all the data of the lines, curved lines (path) and rectangles that should be drawn.
My main requirements for the map are:
Support touch events to detect where exactly the finger is touching.
Great image quality, especially for the drawing of curved and diagonal lines (anti-aliasing)
Optional but very nice to have - Built in ability to zoom, pan and rotate.
So far I tried AndEngine and Android's canvas.
With AndEngine I had trouble implementing anti-aliasing for rendering smooth diagonal lines or drawing curved lines, and as far as I understand, this is not an easy thing to implement in AndEngine.
Though I have to mention that AndEngine's ability to zoom in and pan with the camera instead of modifying the objects on the screen was really nice to have.
I also have a little experience with the built-in Android Canvas, mainly for viewing simple bitmaps, but I'm not sure if it supports all of these things, and especially whether it would provide smooth results.
Last but not least, there's the option of plain OpenGL ES 1 or 2, which, as far as I understand, should with enough work be able to support all the features I require. However, it seems like something that would be hard to implement, and I've never programmed in OpenGL or anything like it, though I'm very willing to learn.
To sum it all up, I need a platform that provides the three things I mentioned before, but also, very importantly, allows me to implement this feature as fast as possible.
Any kind of answer or suggestion would be very much welcomed as I'm very eager to solve this problem!
Thanks!
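For what it's worth, Android's built-in Canvas does support anti-aliasing through Paint, and SVG-style lines and curves map fairly directly onto android.graphics.Path. A minimal, non-authoritative sketch of that kind of drawing; the view subclass and coordinates are made up:

// Minimal sketch of anti-aliased path drawing with Android's Canvas,
// roughly the primitives an SVG-derived indoor map would need.
// The view subclass and sample coordinates are illustrative only.
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Path;
import android.view.View;

public class MapView extends View {
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private final Path path = new Path();

    public MapView(Context context) {
        super(context);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(3f);
        paint.setColor(Color.DKGRAY);
        // A diagonal line plus a curved segment, the cases mentioned above.
        path.moveTo(50f, 50f);
        path.lineTo(300f, 200f);
        path.quadTo(400f, 250f, 500f, 150f);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // Zoom/pan/rotate can be applied by transforming the canvas itself.
        canvas.save();
        canvas.scale(1.5f, 1.5f);
        canvas.drawPath(path, paint);
        canvas.restore();
    }
}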

Marker Recognition on Android (recognising Rubik's Cubes)

I'm developing an augmented reality application for Android that uses the phone's camera to recognise the arrangement of the coloured squares on each face of a Rubik's Cube.
One thing that I am unsure about is how exactly I would go about detecting and recognising the coloured squares on each face of the cube. If you look at a Rubik's Cube you can see that each square is one of six possible colours, with a thin black border. This led me to think that it should be relatively simple to detect a square, possibly using an existing marker detection API.
My question is really: has anybody here had any experience with image recognition on Android? Ideally I'd like to be able to use an existing API, but it would be an interesting project to do from scratch if somebody could point me in the right direction to get started.
Many thanks in advance.
Do you want to point the camera at a cube, and have it understand the configuration?
Recognizing objects in photographs is an open AI problem. So you'll need to constrain the problem quite a bit to get any traction on it. I suggest starting with something like:
The cube will be photographed from a distance of exactly 12 inches, with a 100W light source directly behind the camera. The cube will be set diagonally so it presents exactly 3 faces, with a corner in the center. The camera will be positioned so that it focuses directly on the cube corner in the center.
A picture will be taken. Then the cube will be turned 180 degrees vertically and horizontally, so that the other three faces are visible, and a second picture will be taken. Since you know exactly where each face is expected to be, grab a few pixels from each region and assume that is the color of that square. Remember that the cube will usually be scrambled, not uniform as shown in the picture here. So you always have to look at 9*6 = 54 little squares to get the color of each one.
The information in those two pictures defines the cube configuration. Generate an image of the cube in the same configuration, and allow the user to confirm or correct it.
It might be simpler to take 6 pictures, one of each face, travelling around the faces in a well-defined order. Remember that the center square of each face does not move and defines the correct color for that face.
Once you have the configuration, you can use OpenGL operations to rotate the cube slices. This will be a program with hundreds of lines of code to define and rotate the cube, plus whatever you do for image recognition.
In addition to what Peter said, it is probably best to overlay guide lines on the picture of the cube as the user takes the pictures. The user then lines the cube up within the guide lines, whether it's a single side (a square guide line) or three sides (three squares in perspective). You might also want to have the user specify the number of colored boxes in each row. In your code, sample the color at what should be the center of each colored box and compare it to the other colored boxes (within some tolerance level) to identify the colors, as the sketch below illustrates. In addition to providing the recognized results to the user, it would be nice to allow the user to make changes to the recognized colors. It does not seem like fancy image recognition is needed.
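A minimal sketch of that sampling idea: read the pixel at the expected center of each sticker and classify it against six reference colors with a nearest-match check. The class and method names and the reference color values are assumptions to be tuned for real lighting:

// Sketch of the sampling idea above: read the pixel at the expected center of
// each of the 9 stickers on a face and classify it against six reference
// colors by nearest RGB distance. Reference values and names are assumed.
import android.graphics.Bitmap;
import android.graphics.Color;

public class FaceReader {
    // Rough reference colors for a standard cube; tune for your lighting.
    private static final int[] REFERENCE = {
            Color.rgb(255, 255, 255), // white
            Color.rgb(255, 213, 0),   // yellow
            Color.rgb(0, 155, 72),    // green
            Color.rgb(0, 70, 173),    // blue
            Color.rgb(183, 18, 52),   // red
            Color.rgb(255, 88, 0),    // orange
    };

    // faceLeft/faceTop/faceSize describe where the guide square sits in the photo.
    public int[] readFace(Bitmap photo, int faceLeft, int faceTop, int faceSize) {
        int[] stickers = new int[9];
        int cell = faceSize / 3;
        for (int row = 0; row < 3; row++) {
            for (int col = 0; col < 3; col++) {
                int x = faceLeft + col * cell + cell / 2; // center of the sticker
                int y = faceTop + row * cell + cell / 2;
                stickers[row * 3 + col] = classify(photo.getPixel(x, y));
            }
        }
        return stickers; // indices into REFERENCE
    }

    private int classify(int pixel) {
        int best = 0;
        long bestDist = Long.MAX_VALUE;
        for (int i = 0; i < REFERENCE.length; i++) {
            long dr = Color.red(pixel) - Color.red(REFERENCE[i]);
            long dg = Color.green(pixel) - Color.green(REFERENCE[i]);
            long db = Color.blue(pixel) - Color.blue(REFERENCE[i]);
            long dist = dr * dr + dg * dg + db * db;
            if (dist < bestDist) {
                bestDist = dist;
                best = i;
            }
        }
        return best;
    }
}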
Nice idea. I'm planning to use computer vision and marker detectors too, but for another project. I am still looking to see whether there is any information available on the web, e.g. on linking OpenCV or ARToolKit to the Android SDK. If you have any additional information about how to link a computer vision API, please let me know.
See you soon, and good luck!
NYARToolkit uses marker detection and is written in Java (as well as managed C# for Windows devices). I don't know how well it works on the Android platform, but I have seen it used on Windows Mobile devices, and it's very well done.
Good luck, and happy programming!
I'd suggest looking at the Android OpenCV library. You probably want to examine the blob detection algorithms. You may also want to consider Hough lines or contours to detect quads.
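A hedged sketch of the contour-based quad detection suggested here, using OpenCV's Java bindings. It assumes the OpenCV Android SDK is already initialised, and the Canny thresholds and area/approximation constants are guesses to be tuned:

// Sketch of contour-based quad detection with OpenCV's Java bindings, as
// suggested above. Thresholds and constants are illustrative only.
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class QuadDetector {
    public List<MatOfPoint2f> findQuads(Mat bgrImage) {
        Mat gray = new Mat();
        Mat edges = new Mat();
        Imgproc.cvtColor(bgrImage, gray, Imgproc.COLOR_BGR2GRAY);
        Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0);
        Imgproc.Canny(gray, edges, 50, 150);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(edges, contours, new Mat(),
                Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

        List<MatOfPoint2f> quads = new ArrayList<>();
        for (MatOfPoint contour : contours) {
            MatOfPoint2f curve = new MatOfPoint2f(contour.toArray());
            MatOfPoint2f approx = new MatOfPoint2f();
            double epsilon = 0.04 * Imgproc.arcLength(curve, true);
            Imgproc.approxPolyDP(curve, approx, epsilon, true);
            // Four corners and a reasonable area -> treat it as a sticker candidate.
            if (approx.total() == 4 && Imgproc.contourArea(approx) > 400) {
                quads.add(approx);
            }
        }
        return quads;
    }
}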
