OpenGL object vibrates after moving a distance - Android

I have an object that moves on a terrain with a third-person camera following it. After I move it some distance in different directions, it begins shaking or vibrating, even when it is not moving and the camera is only rotating around it. This is the movement code of the object:
double& delta = engine.getDeltaTime();
GLfloat velocity = delta * movementSpeed;
glm::vec3 t(glm::vec3(0, 0, 1) * (velocity * 3.0f));
//translate the object's matrix before rendering
matrix = glm::translate(matrix, t);
//get the forward vector of the matrix
glm::vec3 f(matrix[2][0], matrix[2][1], matrix[2][2]);
f = glm::normalize(f);
f = f * (velocity * 3.0f);
f = -f;
camera.translate(f);
and the camera rotation code is:
void Camera::rotate(GLfloat xoffset, GLfloat yoffset, glm::vec3& c, double& delta, GLboolean constrainpitch) {
    xoffset *= (delta * this->rotSpeed);
    yoffset *= (delta * this->rotSpeed);
    pitch += yoffset;
    yaw += xoffset;
    if (constrainpitch) {
        if (pitch >= maxPitch) {
            pitch = maxPitch;
            yoffset = 0;
        }
        if (pitch <= minPitch) {
            pitch = minPitch;
            yoffset = 0;
        }
    }
    glm::quat Qx(glm::angleAxis(glm::radians(yoffset), glm::vec3(1.0f, 0.0f, 0.0f)));
    glm::quat Qy(glm::angleAxis(glm::radians(xoffset), glm::vec3(0.0f, 1.0f, 0.0f)));
    glm::mat4 rotX = glm::mat4_cast(Qx);
    glm::mat4 rotY = glm::mat4_cast(Qy);
    view = glm::translate(view, c);
    view = rotX * view;
    view = view * rotY;
    view = glm::translate(view, -c);
}

float is sometimes not enough.
I use double-precision matrices on the CPU side to avoid such problems, but as you are on Android that might not be possible. For the GPU, keep using floats, as there are no 64-bit interpolators yet.
Big numbers are usually the problem.
If your world is big, then you are feeding big numbers into the equations, multiplying any errors, and only at the final stage is everything translated relative to the camera position. The errors stay multiplied while the numbers get clamped, so the error-to-data ratio grows large.
To reduce this problem, convert all vertices to a coordinate system with its origin at or near your camera before rendering. You can ignore rotations; just offset the positions.
This way larger errors occur only far away from the camera, where perspective makes them barely visible anyway... For more info see:
ray and ellipsoid intersection accuracy improvement
Use a cumulative transform matrix instead of Euler angles.
For more info see Understanding 4x4 homogenous transform matrices and all the links at the bottom of that answer.
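To make the rebasing idea concrete, here is a minimal Java sketch (the question uses GLM/C++, but the idea is language-agnostic; all names here, including drawAt, are hypothetical placeholders):

// Minimal sketch of camera-relative rendering (hypothetical names).
// Rebase every object's world position against the camera each frame,
// so only small numbers reach the matrix math and the GPU.
void render(float[][] worldPositions, float[] camPos) {
    for (float[] p : worldPositions) {
        float rx = p[0] - camPos[0]; // relative x
        float ry = p[1] - camPos[1]; // relative y
        float rz = p[2] - camPos[2]; // relative z
        // Build the model matrix from (rx, ry, rz) and render with a
        // view matrix whose translation is zero: the camera sits at the
        // origin, so precision is best exactly where the player looks.
        drawAt(rx, ry, rz); // drawAt is a placeholder for your draw call
    }
}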

This sounds like a numerical effect to me. Even small offsets coming from your game object will influence the rotation of the following camera through small movements / rotations, and that looks like a vibrating object / camera.
So what you can do is:
Check whether the movement is above a threshold value before calculating a new rotation for your camera.
When you are above this threshold, do a linear interpolation between the old and the new rotation using the lerp algorithm for the quaternion (see this Unity answer to get a better idea of what your code can look like: Unity lerp discussion). A sketch of this thresholded smoothing follows below.
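As a rough illustration, here is a minimal Java sketch of that idea. The Quat class and all names are my own hypothetical helpers, not engine code; it uses normalized lerp (nlerp), a common cheap stand-in for slerp at the small angles involved in camera smoothing.

// Hypothetical minimal quaternion smoothing sketch (not engine code).
final class Quat {
    float x, y, z, w;
    Quat(float x, float y, float z, float w) { this.x = x; this.y = y; this.z = z; this.w = w; }

    // Normalized linear interpolation between quaternions a and b.
    static Quat nlerp(Quat a, Quat b, float t) {
        // Take the shortest arc: flip b when the quaternions lie on
        // opposite hemispheres (negative dot product).
        float dot = a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
        float s = dot < 0 ? -1f : 1f;
        float x = a.x + t * (s * b.x - a.x);
        float y = a.y + t * (s * b.y - a.y);
        float z = a.z + t * (s * b.z - a.z);
        float w = a.w + t * (s * b.w - a.w);
        float len = (float) Math.sqrt(x * x + y * y + z * z + w * w);
        return new Quat(x / len, y / len, z / len, w / len);
    }
}

// Only accept a new camera rotation when the object really moved,
// then blend toward it instead of snapping.
static final float MOVE_THRESHOLD = 0.01f; // world units, tune to taste

Quat updateCameraRotation(Quat current, Quat target, float movedDistance) {
    if (movedDistance < MOVE_THRESHOLD) {
        return current;                       // ignore jitter-sized moves
    }
    return Quat.nlerp(current, target, 0.2f); // smooth toward the target
}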

Related

Best way to draw a path traveled

I'm making an application to track a vehicle based on GPS coordinates.
I created a SurfaceView to draw the field, the vehicle, and the path (route) it travels.
The result looked like this:
The black dots represent the incoming GPS coordinates, and the blue rectangles are the area covered by the path traveled (the width of the path is configurable).
The way I'm drawing the path with blue rectangles is what my question is about.
With this approach I need to handle a few situations:
I need to calculate the field's rotation angle so that the path always gets left behind. (completed)
I need to calculate the angle of rotation of each rectangle so they face toward the vehicle. (completed)
In the future I will need:
Detect when the vehicle passes twice in the same place. (based on the path traveled)
Calculate the total area (m²) traveled by the vehicle.
I would like some tips for drawing this path.
My code:
public void draw(Canvas canvas) {
    Log.d(getClass().getSimpleName(), "draw");
    canvas.save();

    // translate canvas to vehicle position
    canvas.translate((float) center.cartesian(0), (float) center.cartesian(1));

    float fieldRotation = 0;
    if (trackerHistory.size() > 1) {
        /*
        Before drawing the path, take only the last position and find the rotation angle of the field.
        */
        Vector lastPosition = new Vector(convertToTerrainCoordinates(lastPoint));
        Vector preLastPosition = new Vector(convertToTerrainCoordinates(preLastPoint));
        float shift = (float) lastPosition.distanceTo(preLastPosition);

        /*
        Treating the last coordinate as a triangle, 'preLastPosition' holds the legs, while 'shift' is the hypotenuse.
        */
        // If the Y offset is negative, then the opposite side is the Y displacement
        if (preLastPosition.cartesian(1) < 0) {
            // Dividing the opposite side by the hypotenuse gives the sine of the angle to rotate by.
            double sin = preLastPosition.cartesian(1) / shift;
            // When Y is negative, it is necessary to add or subtract 90 degrees depending on the value of X.
            // Math.asin() gives the arc (in radians) for the previously calculated sine,
            // and Math.toDegrees() converts radians to degrees.
            if (preLastPosition.cartesian(0) < 0) {
                fieldRotation = (float) (Math.toDegrees(Math.asin(sin)) - 90d);
            } else {
                fieldRotation = (float) (Math.abs(Math.toDegrees(Math.asin(sin))) + 90d);
            }
        }
        // If not, the opposite side is the X offset
        else {
            // Dividing the opposite side by the hypotenuse gives the sine of the angle to rotate by.
            double senAngulo = preLastPosition.cartesian(0) / shift;
            // Math.asin() gives the arc (in radians) for the previously calculated sine,
            // and Math.toDegrees() converts radians to degrees.
            fieldRotation = (float) Math.toDegrees(Math.asin(senAngulo));
        }
    }

    final float dpiTrackerWidth = Navigator.meterToDpi(trackerWidth); // width of rect
    final Path positionHistory = new Path(); // to draw the route
    final Path circle = new Path(); // to draw the positions

    /*
    Iterate over the historical positions and draw the path
    */
    for (int i = 1; i < trackerHistory.size(); i++) {
        Vector currentPosition = new Vector(convertToTerrainCoordinates(trackerHistory.get(i))); // vector with X and Y position
        Vector lastPosition = new Vector(convertToTerrainCoordinates(trackerHistory.get(i - 1))); // vector with X and Y position

        circle.addCircle((float) currentPosition.cartesian(0), (float) currentPosition.cartesian(1), 3, Path.Direction.CW);
        circle.addCircle((float) lastPosition.cartesian(0), (float) lastPosition.cartesian(1), 3, Path.Direction.CW);

        if (isInsideOfScreen(currentPosition.cartesian(0), currentPosition.cartesian(1)) ||
                isInsideOfScreen(lastPosition.cartesian(0), lastPosition.cartesian(1))) {
            /*
            Calculate the angle from the triangle's sides
            */
            float shift = (float) currentPosition.distanceTo(lastPosition);
            Vector dif = lastPosition.minus(currentPosition);
            float sin = (float) (dif.cartesian(0) / shift);
            float degress = (float) Math.toDegrees(Math.asin(sin));

            /*
            Create a Rect to draw the displacement between the two coordinates
            */
            RectF rect = new RectF();
            rect.left = (float) (currentPosition.cartesian(0) - (dpiTrackerWidth / 2));
            rect.right = rect.left + dpiTrackerWidth;
            rect.top = (float) currentPosition.cartesian(1);
            rect.bottom = rect.top - shift;

            Path p = new Path();
            Matrix m = new Matrix();
            p.addRect(rect, Path.Direction.CCW);
            m.postRotate(-degress, (float) currentPosition.cartesian(0), (float) currentPosition.cartesian(1));
            p.transform(m);
            positionHistory.addPath(p);
        }
    }

    // rotate the map so the route goes downward
    canvas.rotate(fieldRotation);
    canvas.drawPath(positionHistory, paint);
    canvas.drawPath(circle, paint2);
    canvas.restore();
}
My goal is to have something like this application: https://play.google.com/store/apps/details?id=hu.zbertok.machineryguide (but only in 2D for now)
EDIT:
To clarify my doubts a bit more:
I do not have much experience with this. I would like a better way to draw the path; with rectangles it did not turn out very well. Note that in the curves there are some empty spaces.
Another point is the rotation of the rectangles: I'm rotating them at drawing time, and I believe this will make it difficult to detect overlaps.
I believe I need math help with the rotation of the objects and with overlap detection, and also with drawing the path as a filled shape.
After some time researching, I reached a successful outcome. I will explain my reasoning and the solution.
As I explained in the question, I have the coordinates traveled by the vehicle along the way, and also a setting for the width of the path to be drawn.
The LibGDX library provides a number of ready-made features, such as an orthographic camera implementation for working with positioning, rotation, etc.
With LibGDX I converted the GPS coordinates into side points of the traveled path (a sketch of this conversion follows below). Like this:
The next challenge was to fill the path traveled. First I tried using rectangles, but the result was as shown in my question.
So the solution was to build triangles using the side points of the path as vertices. Like this:
Then simply fill in the triangles. Like this:
Finally, using the stencil buffer, I set up OpenGL to highlight overlaps. Like this:
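For context, this is roughly how the side points can be derived from the center line and the configured width; a minimal sketch using LibGDX's Vector2 (the method and array names are mine, not from the actual project):

import com.badlogic.gdx.math.Vector2;

// Sketch: derive left/right border points of the path from the center
// line and the configured width.
void computeSidePoints(Vector2[] center, float pathWidth,
                       Vector2[] left, Vector2[] right) {
    float half = pathWidth / 2f;
    for (int i = 0; i < center.length; i++) {
        // Direction of travel; at the endpoints use the adjacent segment.
        int a = Math.max(i - 1, 0);
        int b = Math.min(i + 1, center.length - 1);
        Vector2 dir = center[b].cpy().sub(center[a]).nor();
        // The perpendicular of (x, y) is (-y, x).
        Vector2 normal = new Vector2(-dir.y, dir.x);
        left[i]  = center[i].cpy().add(normal.cpy().scl(half));
        right[i] = center[i].cpy().add(normal.cpy().scl(-half));
    }
    // Consecutive (left, right) pairs can then be joined into two
    // triangles per segment (a triangle strip) and filled.
}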
Other issues fixed:
To calculate the covered area, simply sum the areas of the triangles along the path (see the sketch after this list).
To detect overlapping, just check whether the current position of the vehicle is inside a triangle.
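Both checks come down to the same signed-area primitive. A minimal self-contained Java sketch, assuming each triangle is stored as six values (ax, ay, bx, by, cx, cy); the names are mine:

// Signed area of triangle (a, b, c) via the 2D cross product;
// positive when the vertices wind counter-clockwise.
static double signedArea(double ax, double ay, double bx, double by,
                         double cx, double cy) {
    return 0.5 * ((bx - ax) * (cy - ay) - (cx - ax) * (by - ay));
}

// Covered area: sum of absolute triangle areas along the path.
static double coveredArea(double[][] tris) {
    double total = 0;
    for (double[] t : tris) {
        total += Math.abs(signedArea(t[0], t[1], t[2], t[3], t[4], t[5]));
    }
    return total;
}

// Overlap check: the point (px, py) is inside a triangle iff the three
// sub-triangles formed with each edge all wind the same way.
static boolean contains(double[] t, double px, double py) {
    double d1 = signedArea(t[0], t[1], t[2], t[3], px, py);
    double d2 = signedArea(t[2], t[3], t[4], t[5], px, py);
    double d3 = signedArea(t[4], t[5], t[0], t[1], px, py);
    boolean hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
    boolean hasPos = d1 > 0 || d2 > 0 || d3 > 0;
    return !(hasNeg && hasPos);
}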
Thanks to:
AlexWien for the attention and the time spent.
Conner Anderson for the LibGDX videos.
And a special thanks to Luis Eduardo for the knowledge; he helped me a lot.
The sample source code.
Usually such a path is drawn using a "path" method from the graphics lib.
In that lib you can create a polyline and give it a line width.
You can further specify how corners are filled (BEVEL_JOIN, MITER_JOIN).
The main question is whether the path is drawn while driving or afterwards.
Afterwards is no problem.
Drawing while driving might be a bit tricky if you want to avoid redrawing the whole path every second.
When you use a Path with moveTo and lineTo to create a polyline, you can set a line width, and the graphics lib will do all of that for you.
Then there will be no gaps, since it is a polyline.
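On Android that suggestion looks roughly like the sketch below; the point arrays and method name are assumptions, not the asker's code:

import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;

// Draw the traveled route as one stroked polyline: the stroke width is
// the configured path width, and round joins/caps avoid gaps in curves.
// Assumes at least two points.
void drawRoute(Canvas canvas, float[] xs, float[] ys, float pathWidthPx) {
    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    paint.setStyle(Paint.Style.STROKE);
    paint.setStrokeWidth(pathWidthPx);
    paint.setStrokeJoin(Paint.Join.ROUND); // or BEVEL / MITER
    paint.setStrokeCap(Paint.Cap.ROUND);

    Path route = new Path();
    route.moveTo(xs[0], ys[0]);
    for (int i = 1; i < xs.length; i++) {
        route.lineTo(xs[i], ys[i]);
    }
    canvas.drawPath(route, paint);
}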

Screen cropping issue in a Unity 2D game

I developed a 2D game in Unity. First I ran the game in the web player, and the screen was cut off at both ends. Then I created an Android APK from it, and on Android devices it is also cut off at both ends. We are using the Free Aspect ratio. I don't know much about screen sizes, so how can I overcome this problem?
CameraFollow.cs script:
using UnityEngine;
using System.Collections;

public class CameraFollow : MonoBehaviour
{
    public float xMargin = 1f;    // Distance in the x axis the player can move before the camera follows.
    public float yMargin = 1f;    // Distance in the y axis the player can move before the camera follows.
    public float xSmooth = 8f;    // How smoothly the camera catches up with its target movement in the x axis.
    public float ySmooth = 8f;    // How smoothly the camera catches up with its target movement in the y axis.
    public Vector2 maxXAndY;      // The maximum x and y coordinates the camera can have.
    public Vector2 minXAndY;      // The minimum x and y coordinates the camera can have.

    private Transform player;     // Reference to the player's transform.

    void Awake ()
    {
        // Setting up the reference.
        player = GameObject.FindGameObjectWithTag("Player").transform;
        //camera.orthographicSize = 640 / screenwidth * screenheight / 2;
    }

    bool CheckXMargin()
    {
        // Returns true if the distance between the camera and the player in the x axis is greater than the x margin.
        return Mathf.Abs(transform.position.x - player.position.x) > xMargin;
    }

    bool CheckYMargin()
    {
        // Returns true if the distance between the camera and the player in the y axis is greater than the y margin.
        return Mathf.Abs(transform.position.y - player.position.y) > yMargin;
    }

    void FixedUpdate ()
    {
        TrackPlayer();
    }

    void TrackPlayer ()
    {
        // By default the target x and y coordinates of the camera are its current x and y coordinates.
        float targetX = transform.position.x;
        float targetY = transform.position.y;

        // If the player has moved beyond the x margin...
        if (CheckXMargin())
            // ... the target x coordinate should be a Lerp between the camera's current x position and the player's current x position.
            targetX = Mathf.Lerp(transform.position.x, player.position.x, xSmooth * Time.deltaTime);

        // If the player has moved beyond the y margin...
        if (CheckYMargin())
            // ... the target y coordinate should be a Lerp between the camera's current y position and the player's current y position.
            targetY = Mathf.Lerp(transform.position.y, player.position.y, ySmooth * Time.deltaTime);

        // The target x and y coordinates should not be larger than the maximum or smaller than the minimum.
        targetX = Mathf.Clamp(targetX, minXAndY.x, maxXAndY.x);
        targetY = Mathf.Clamp(targetY, minXAndY.y, maxXAndY.y);

        // Set the camera's position to the target position with the same z component.
        transform.position = new Vector3(targetX, targetY, transform.position.z);
    }
}
Since you are making a 2D game, I presume you are using an orthographic camera. Unlike a perspective camera, an orthographic camera shows more of the scene the bigger your resolution is.
You need to normalize the orthographic size to a reference resolution:
camera.orthographicSize = 640/screenwidth * screenheight/2
In the above code the orthographic size is normalized to a 640-pixel width. For example, on a 1080x1920 portrait screen this gives 640/1080 * 1920/2 ≈ 569. (Link)
Free Aspect is just the current game view; it does not reflect what will be seen in the final product. I recommend using one of the preset ratios, or, if you're up to date with Unity3D, you can add your own custom resolutions.
So if you're designing for a phone held vertically (a GS4, for example) you can set it to 1080x1920. This will give you a better representation of what you will see on your device.

Get quaternion from Android gyroscope?

The official development documentation suggests the following way of obtaining the quaternion from the 3D rotation rate vector (wx, wy, wz).
// Create a constant to convert nanoseconds to seconds.
private static final float NS2S = 1.0f / 1000000000.0f;
// Maximum allowable margin of error when normalizing the axis
// (defined here for completeness; the value is illustrative).
private static final float EPSILON = 0.000000001f;
private final float[] deltaRotationVector = new float[4];
private float timestamp;

public void onSensorChanged(SensorEvent event) {
    // This timestep's delta rotation to be multiplied by the current rotation
    // after computing it from the gyro sample data.
    if (timestamp != 0) {
        final float dT = (event.timestamp - timestamp) * NS2S;
        // Axis of the rotation sample, not normalized yet.
        float axisX = event.values[0];
        float axisY = event.values[1];
        float axisZ = event.values[2];

        // Calculate the angular speed of the sample
        float omegaMagnitude = (float) Math.sqrt(axisX*axisX + axisY*axisY + axisZ*axisZ);

        // Normalize the rotation vector if it's big enough to get the axis
        // (that is, EPSILON should represent your maximum allowable margin of error)
        if (omegaMagnitude > EPSILON) {
            axisX /= omegaMagnitude;
            axisY /= omegaMagnitude;
            axisZ /= omegaMagnitude;
        }

        // Integrate around this axis with the angular speed by the timestep
        // in order to get a delta rotation from this sample over the timestep.
        // We will convert this axis-angle representation of the delta rotation
        // into a quaternion before turning it into the rotation matrix.
        float thetaOverTwo = omegaMagnitude * dT / 2.0f;
        float sinThetaOverTwo = (float) Math.sin(thetaOverTwo);
        float cosThetaOverTwo = (float) Math.cos(thetaOverTwo);
        deltaRotationVector[0] = sinThetaOverTwo * axisX;
        deltaRotationVector[1] = sinThetaOverTwo * axisY;
        deltaRotationVector[2] = sinThetaOverTwo * axisZ;
        deltaRotationVector[3] = cosThetaOverTwo;
    }
    timestamp = event.timestamp;
    float[] deltaRotationMatrix = new float[9];
    SensorManager.getRotationMatrixFromVector(deltaRotationMatrix, deltaRotationVector);
    // User code should concatenate the delta rotation we computed with the current rotation
    // in order to get the updated rotation.
    // rotationCurrent = rotationCurrent * deltaRotationMatrix;
}
My question is:
This is quite different from the acceleration case, where computing the resultant acceleration from the accelerations ALONG the 3 axes makes sense.
I am really confused about why the resultant rotation rate can also be computed from the sub-rotation rates AROUND the 3 axes; it does not make sense to me.
Why would this method - finding the composite rotation rate magnitude - even work?
Since your title does not really match your questions, I'm trying to answer as much as I can.
Gyroscopes don't give an absolute orientation (like the ROTATION_VECTOR does) but only rotational velocities around the axes they are built to 'rotate' around. This is due to the design and construction of a gyroscope. Imagine the construction below: the golden thing is rotating and, due to the laws of physics, it does not want to change its rotation. Now you can rotate the frame and measure these rotations.
Now if you want to obtain something like the 'current rotational state' from the gyroscope, you have to start with an initial rotation, call it q0, and constantly add the tiny rotational differences that the gyroscope measures around the axes to it: q1 = q0 + gyro0, q2 = q1 + gyro1, ...
In other words: the gyroscope gives you the difference it has rotated around the three constructed axes, so you are not composing absolute values but small deltas.
Now this is very general and leaves a couple of questions unanswered:
Where do I get an initial position from? Answer: Have a look at the Rotation Vector Sensor - you can use the Quaternion obtained from there as an initialisation
How to 'sum' q and gyro?
Depending on the current representation of a rotation: If you use a rotation matrix, a simple matrix multiplication should do the job, as suggested in the comments (note that this matrix-multiplication implementation is not efficient!):
/**
 * Performs naive n^3 matrix multiplication and returns C = A * B
 *
 * @param A Matrix in array form (e.g. 3x3 => 9 values)
 * @param B Matrix in array form (e.g. 3x3 => 9 values)
 * @return A * B
 */
public float[] naivMatrixMultiply(float[] B, float[] A) {
    int mA, nA, mB, nB;
    mA = nA = (int) Math.sqrt(A.length);
    mB = nB = (int) Math.sqrt(B.length);
    if (nA != mB)
        throw new RuntimeException("Illegal matrix dimensions.");
    float[] C = new float[mA * nB];
    for (int i = 0; i < mA; i++)
        for (int j = 0; j < nB; j++)
            for (int k = 0; k < nA; k++)
                C[i + nA * j] += (A[i + nA * k] * B[k + nB * j]);
    return C;
}
To use this method, imagine that mRotationMatrix holds the current state; these two lines do the job:
SensorManager.getRotationMatrixFromVector(deltaRotationMatrix, deltaRotationVector);
mRotationMatrix = naivMatrixMultiply(mRotationMatrix, deltaRotationMatrix);
// Apply rotation matrix in OpenGL
gl.glMultMatrixf(mRotationMatrix, 0);
If you choose to use quaternions, imagine again that mQuaternion contains the current state:
// Perform quaternion multiplication
mQuaternion.multiplyByQuat(deltaRotationVector);
// Apply the quaternion in OpenGL
gl.glRotatef((float) (2.0f * Math.acos(mQuaternion.getW()) * 180.0f / Math.PI), mQuaternion.getX(), mQuaternion.getY(), mQuaternion.getZ());
Quaternion multiplication is described here - equation (23). Make sure you apply the multiplication correctly, since it is not commutative!
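For reference, here is a minimal Java sketch of Hamilton quaternion multiplication (my own helper, not the mQuaternion class used above), using the same (x, y, z, w) component order as deltaRotationVector:

// Hamilton product r = q1 * q2 for quaternions stored as {x, y, z, w}.
// Note the order: r applies q2's rotation first, then q1's.
static float[] multiplyQuat(float[] q1, float[] q2) {
    float x1 = q1[0], y1 = q1[1], z1 = q1[2], w1 = q1[3];
    float x2 = q2[0], y2 = q2[1], z2 = q2[2], w2 = q2[3];
    return new float[] {
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2, // x
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2, // y
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2, // z
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2  // w
    };
}

// Usage sketch: accumulate each gyro delta into the current orientation,
// e.g. mQuat = multiplyQuat(mQuat, deltaRotationVector);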
If you simply want to know the rotation of your device (I assume this is what you ultimately want), I strongly recommend the ROTATION_VECTOR sensor. On the other hand, gyroscopes are quite precise for measuring rotational velocity and have a very good dynamic response, but they suffer from drift and don't give you an absolute orientation (relative to magnetic north or gravity).
UPDATE: If you want to see a full example, you can download the source-code for a simple demo-app from https://bitbucket.org/apacha/sensor-fusion-demo.
Makes sense to me. Acceleration sensors typically work by having some measurable quantity change when force is applied to the axis being measured. E.g. if gravity is pulling down on the sensor measuring that axis, it conducts electricity better. So now you can tell how hard gravity, or acceleration in some direction, is pulling. Easy.
Meanwhile, gyros are things that spin (OK, or bounce back and forth in a straight line like a tweaked diving board). The gyro is spinning; now you spin, and the gyro is going to look like it is spinning faster or slower depending on the direction you spun. Or if you try to move it, it will resist and try to keep going the way it is going. So you just get a rotation change out of measuring it, and you have to work out the orientation from those changes by integrating them over time.
Typically none of these things are one sensor either. They are often 3 different sensors all arranged perpendicular to each other, and measuring a different axis. Sometimes all the sensors are on the same chip, but they are still different things on the chip measured separately.

android opengl check visibility of a point with camera zoom

I'm working on an Android OpenGL ES 1.1 2D game with a top view of a vehicle and a camera zoom relative to the vehicle's speed. When the speed increases, the camera zooms out to offer the player better road visibility.
I'm having a little trouble finding the exact way to detect whether a sprite is visible or not given its position and the current camera zoom.
An important precision: all of my game's objects are at the same z coordinate. I use 3D just for the camera effect (that's why I do not need complicated frustum calculations).
Here is a sample of my GLSurfaceView.Renderer class:
public static float fov_degrees = 45f;
public static float fov_radians = fov_degrees / 180 * (float) Math.PI;
public static float aspect; // 1.15572 on my device
public static float camZ;   // 927 on my device

@Override
public void onSurfaceChanged(GL10 gl, int x, int y) {
    aspect = (float) x / (float) y;
    camZ = y / 2 / (float) Math.tan(fov_radians / 2);
    Camera.MINIDECAL = y / 4; // minimum cam zoom out (192 on my device)

    if (x == 0) { // Prevent A Divide By Zero By
        x = 1;    // Making Height Equal One
    }

    gl.glViewport(0, 0, x, y);           // Reset The Current Viewport
    gl.glMatrixMode(GL10.GL_PROJECTION); // Select The Projection Matrix
    gl.glLoadIdentity();                 // Reset The Projection Matrix

    // Calculate The Aspect Ratio Of The Window
    GLU.gluPerspective(gl, fov_degrees, aspect, camZ / 10, camZ * 10);
    GLU.gluLookAt(gl, 0, 0, camZ, 0, 0, 0, 0, 1, 0); // move camera back
    gl.glMatrixMode(GL10.GL_MODELVIEW);  // Select The Modelview Matrix
    gl.glLoadIdentity();                 // Reset The Modelview Matrix
When I draw any camera-relative object I use this translation:
gl.glTranslatef(position.x - camera.centerPosition.x, position.y - camera.centerPosition.y, -camera.zDecal);
Everything is displayed fine; the problem comes from my physics thread when it checks whether an object is visible or not:
public static boolean isElementVisible(Element element) {
    xDelta = (float) ((camera.zDecal + GameRenderer.camZ) * GameRenderer.aspect * Math.atan(GameRenderer.fov_radians));
    yDelta = (float) ((camera.zDecal + GameRenderer.camZ) * Math.atan(GameRenderer.fov_radians));
    // (xDelta and yDelta are in reality updated only once per frame or when the camera zoom changes)

    Camera camera = ObjectRegistry.INSTANCE.camera;
    float xMin = camera.centerPosition.x - xDelta / 2;
    float xMax = camera.centerPosition.x + xDelta / 2;
    float yMin = camera.centerPosition.y - yDelta / 2;
    float yMax = camera.centerPosition.y + yDelta / 2;
    // xMin and yMin are supposed to be the lower x and y bounds of the visible plane,
    // same for xMax and yMax.
    // Then I just check that my sprite is visible in this rectangle.
    Vector2 phD = element.getDimToTestIfVisibleOnScreen();
    int sizeXd2 = (int) phD.x / 2;
    int sizeYd2 = (int) phD.y / 2;
    return (element.position.x + sizeXd2 > xMin)
        && (element.position.x - sizeXd2 < xMax)
        && (element.position.y - sizeYd2 < yMax)
        && (element.position.y + sizeYd2 > yMin);
}
Unfortunately, the objects were disappearing too soon and appearing too late, so I manually added some zoom-out on the camera for testing purposes.
I did some manual tests and found that adding approximately 260 to the camera z value while calculating xDelta and yDelta made it work.
So the lines are now:
xDelta = (float) ((camera.zDecal + GameRenderer.camZ + 260) * GameRenderer.aspect * Math.atan(GameRenderer.fov_radians));
yDelta = (float) ((camera.zDecal + GameRenderer.camZ + 260) * Math.atan(GameRenderer.fov_radians));
Because this is a hack and the magic number may not work on every device, I would like to understand what I missed. I guess there is something in that "260" magic number that comes from the fov or the width/height ratio, and that could be turned into a formula parameter for pixel-perfect detection.
Any guess?
My guess is that you should be using Math.tan(GameRenderer.fov_radians) instead of Math.atan(GameRenderer.fov_radians).
Reasoning:
If you used a camera with a 90-degree fov, then xDelta and yDelta should be infinitely large, right? Since the camera would have to view the entire infinite plane.
tan(pi/2) is infinite (and negative infinity). atan(pi/2) is merely 1.00388...
tan(pi/4) is 1, compared to atan(pi/4) of 0.66577...
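For what it's worth, the standard relation for the visible half-extents of a symmetric perspective frustum at distance d is halfHeight = d * tan(fov/2), with halfWidth = halfHeight * aspect. Here is a hedged Java sketch of a visibility test built on that relation; the parameter names echo the question's variables, but this is my reconstruction, not a tested drop-in:

// Axis-aligned visibility test against the frustum cross-section at
// the sprites' depth. distance = camera distance to the sprite plane
// (e.g. camZ + zoom offset in the question's terms).
static boolean isVisible(float spriteX, float spriteY,
                         float halfSpriteW, float halfSpriteH,
                         float camCenterX, float camCenterY,
                         float distance, float fovRadians, float aspect) {
    // Half the height/width of the visible rectangle at 'distance'.
    float halfHeight = distance * (float) Math.tan(fovRadians / 2f);
    float halfWidth  = halfHeight * aspect;

    return spriteX + halfSpriteW > camCenterX - halfWidth
        && spriteX - halfSpriteW < camCenterX + halfWidth
        && spriteY + halfSpriteH > camCenterY - halfHeight
        && spriteY - halfSpriteH < camCenterY + halfHeight;
}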

How can I draw a cylinder in OpenGL ES on Android?

Can anyone help me draw a cylinder in OpenGL ES on Android? Whatever I draw looks like a rectangle.
I would appreciate any tips or links.
Here is the code I've tried:
int VERTICES = 180; // more than needed
float coords[] = new float[VERTICES * 3];
float theta = 0;
for (int i = 0; i < VERTICES * 3; i += 3) {
    coords[i + 0] = (float) Math.cos(theta);
    coords[i + 1] = (float) Math.sin(theta);
    coords[i + 2] = 0;
    _vertexBuffer.put(coords[i + 0]);
    _vertexBuffer.put(coords[i + 1]);
    _vertexBuffer.put(coords[i + 2]);
    theta += Math.PI / 90;
}
This will only draw a circle. A cylinder is more complicated, as you will need to define vertices in a translated z plane, and define them with correct normals (either facing inward, as if you were inside the cylinder, i.e. a tunnel, or outward, as when looking at a pipe), which is the trickier part.
I'm doing this right now (which is what brought me here) and have the cylinder drawn, but I'm pretty sure my normals are incorrect, as my lighting looks a bit off. I'll post some code when I figure it out.
Edit: I realized the code also doesn't actually draw a circle. Here is how to do that (in 2D), sketched as runnable Java:

float R = 1.0f;         // radius
int NUM_VERTICES = 64;  // number of vertices you want in the circle
float[] vertices = new float[NUM_VERTICES * 3];
// delta: angle in radians between consecutive vertex definitions
double delta = (Math.PI / 180) * (360.0 / NUM_VERTICES);
for (int i = 0; i < NUM_VERTICES; i++) {
    vertices[i * 3]     = R * (float) Math.cos(delta * i); // x
    vertices[i * 3 + 1] = R * (float) Math.sin(delta * i); // y
    vertices[i * 3 + 2] = 0;                               // z
}
// Note: you may need to repeat the first vertex to close the circle,
// depending on which GL draw mode you are using.
Last edit: I just realized I was overcomplicating the normals by reusing some calculate-normal-from-triangle code I had. Instead, the normal calculation for a cylinder is simple if you consider the origin (0,0) to be the center of each circular strip: the normal is just the vertex position scaled to length 1. For normals facing inward on a cylinder (i.e. a tunnel), the x,y values would be inverted (this is assuming you are looking down the -z axis). A sketch of the full vertex/normal generation follows below.
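Here is a minimal sketch of that idea in Java, building an open cylinder as two stacked rings suitable for a triangle strip; the parameter names and data layout are my own assumptions:

// Vertices and normals for an open cylinder with outward-facing normals.
// Layout: bottom/top vertex pairs around the circle, seam repeated once,
// ready for GL_TRIANGLE_STRIP.
static float[][] buildCylinder(float radius, float height, int segments) {
    int count = (segments + 1) * 2;            // +1 repeats the seam
    float[] vertices = new float[count * 3];
    float[] normals  = new float[count * 3];
    for (int i = 0; i <= segments; i++) {
        double angle = 2.0 * Math.PI * i / segments;
        float nx = (float) Math.cos(angle);    // point on the unit circle,
        float ny = (float) Math.sin(angle);    // doubles as the normal
        for (int ring = 0; ring < 2; ring++) { // 0 = bottom, 1 = top
            int base = (i * 2 + ring) * 3;
            vertices[base]     = radius * nx;
            vertices[base + 1] = radius * ny;
            vertices[base + 2] = ring == 0 ? 0 : height;
            // Outward normal = vertex position scaled to length 1;
            // negate nx and ny here for inward (tunnel) normals.
            normals[base]     = nx;
            normals[base + 1] = ny;
            normals[base + 2] = 0;
        }
    }
    return new float[][] { vertices, normals };
}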
