Understanding buffers in OpenGL ES on Android

I have this vertex shader that I'm using to rotate or translate the object depending on state values.
private final String vertexShader="" +
"attribute vec4 vertex;" +
"attribute vec4 colors;" +
"varying vec4 vcolors;" +
"uniform int x;" +
"uniform vec4 translate;"+
"uniform mat4 rotate;"+
"void main(){" +
"vec4 app_verte=vertex;" +
"if(x==1){" +
"app_verte=vertex+translate;" +
"}else if(x==2){" +
"app_verte=rotate*app_verte;" +
"}" +
"vcolors=colors;" +
"gl_Position=app_verte;" +
"}";
For the rotation I use a matrix that is built from a float[16] array as follows:
|cos(angle),-sin(angle),0,0|
|sin(angle), cos(angle),0,0|
|0 , ,0,0|
Now I have several questions, because I find this really hard to understand. If I want to change the type of transformation I have to set the x value. For a continuous transformation I assumed that the vertex buffer would stay the same and that, after a transformation, the values in the buffer would be updated. But nothing happens, because it transforms and draws with the same coordinates every time; I only upload the coordinate buffer once, at the start. Is there a way to reuse the same buffer that is already in VRAM without uploading it every time? And if not, how can I read the transformed buffer back into my buffer object after the transformation, without transforming the points in the array on the CPU and putting them back into the buffer?
Sorry for my English, and thanks to all.

The vertex buffers are designed this way so that you send them to the GPU only once, to reduce traffic. You then use matrices (or other systems, such as a translation vector) to apply the transform in the vertex shader, so each frame you only send at most a 4x4 float matrix.
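To make that concrete, here is a minimal GLES20 sketch of "upload once, transform per frame" (the names iProgId, vertexData, vertexCount and transformMatrix are assumptions, not from the question; the vertex attribute and rotate uniform match the shader above): the positions go into a VBO a single time, and afterwards only a 4x4 uniform is re-sent each frame.

// Once, e.g. in onSurfaceCreated(): copy the vertex data into a VBO that stays in VRAM.
int[] vbo = new int[1];
GLES20.glGenBuffers(1, vbo, 0);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER,
        vertexData.capacity() * 4,   // size in bytes (4 bytes per float)
        vertexData,                  // FloatBuffer holding the positions
        GLES20.GL_STATIC_DRAW);

// Every frame, e.g. in onDrawFrame(): only the transform is re-sent, never the vertices.
GLES20.glUseProgram(iProgId);
int rotHandle = GLES20.glGetUniformLocation(iProgId, "rotate");
GLES20.glUniformMatrix4fv(rotHandle, 1, false, transformMatrix, 0);

GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);
int posHandle = GLES20.glGetAttribLocation(iProgId, "vertex");
GLES20.glEnableVertexAttribArray(posHandle);
GLES20.glVertexAttribPointer(posHandle, 4, GLES20.GL_FLOAT, false, 0, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);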
If I understand your issue correctly, it lies in the fact that you use multiple systems at the same time. You have a translation vector and a matrix, but you use either one or the other. So in your case you might be able to simply apply both of them in the shader, as app_verte = rotate*app_verte + translate; or app_verte = rotate*(app_verte + translate); note that these two are not the same, and I am guessing that at some point you will need something like app_verte = rotate2*(rotate1*app_verte + translate1) + translate2;, which is not sustainable since the number of operations will increase over time.
So you need to choose a single system, which in your case should be a matrix. Rather than sending the translation separately, you can translate the matrix on the CPU side of your application and send only that to the shader. You need tools to multiply matrices and to generate translation and rotation matrices. You can write them yourself, but looking at the one you posted I am pretty sure the second to last value should be 1 and not 0 (though it must be a typo, since the last row contains 3 values while the others contain 4).
So have a single matrix which at the beginning is set to identity, which corresponds to x=0. Then for the x=1 situation set that matrix to a translation matrix, myMatrix = generateTranslationMatrix(x, y, z), and for x=2 do the same with a rotation matrix, myMatrix = generateRotationMatrix(x, y, z, angle). Now when you need to continue the operation, to concatenate the two you simply multiply them, so for both you would do myMatrix = generateTranslationMatrix(x, y, z)*generateRotationMatrix(x, y, z, angle). But there is no reason to keep the values separate either, so in the end you just want some methods that manipulate the state of the orientation:
Matrix myMatrix;

onLoad() {
    myMatrix = Matrix.identity();
}

onTurnRight(float angle) {
    myMatrix = myMatrix * generateRotationMatrix(0, 1, 0, angle);
}

onMove(float x, float y, float z) {
    myMatrix = myMatrix * generateTranslationMatrix(x, y, z);
}
Then you can add other methods to your code as needed. For instance, if you handle touch events so that when a finger moves up or down you move forward or backwards, while left and right rotate the object, it will look something like this:
onFingerMoved(float x, float y) {
    const float xfactor = 0.01; // Modify to control the speed of rotation
    const float yfactor = -0.1; // Modify to control the speed of movement
    float dx = previousX - x;
    float dy = previousY - y;
    onTurnRight(dx * xfactor);
    onMove(.0, .0, dy * yfactor); // Assuming the Z coordinate is forward
    previousX = x;
    previousY = y;
}
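On Android the generateRotationMatrix/generateTranslationMatrix helpers above map directly onto android.opengl.Matrix; here is a minimal sketch of the same accumulation idea, assuming these methods live inside your renderer (the field names are mine, not from the answer):

private final float[] myMatrix = new float[16];
private final float[] step = new float[16];
private final float[] temp = new float[16];

void onLoad() {
    Matrix.setIdentityM(myMatrix, 0);                     // x = 0: identity
}

void onTurnRight(float angleDegrees) {
    Matrix.setRotateM(step, 0, angleDegrees, 0f, 1f, 0f); // rotation about the y axis
    Matrix.multiplyMM(temp, 0, myMatrix, 0, step, 0);     // myMatrix = myMatrix * step
    System.arraycopy(temp, 0, myMatrix, 0, 16);
}

void onMove(float x, float y, float z) {
    Matrix.setIdentityM(step, 0);
    Matrix.translateM(step, 0, x, y, z);                  // translation matrix
    Matrix.multiplyMM(temp, 0, myMatrix, 0, step, 0);
    System.arraycopy(temp, 0, myMatrix, 0, 16);
}

Each frame the accumulated matrix is then uploaded once, e.g. GLES20.glUniformMatrix4fv(uMatrixHandle, 1, false, myMatrix, 0);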

Related

opengl object vibrate after moving a distance

I have an object which moves on a terrain, and a third-person camera follows it. After I move it for some distance in different directions, it begins to shake or vibrate even when it is not moving and the camera rotates around it. This is the movement code of the object:
double& delta = engine.getDeltaTime();
GLfloat velocity = delta * movementSpeed;
glm::vec3 t(glm::vec3(0, 0, 1) * (velocity * 3.0f));
// translate the object's matrix before rendering
matrix = glm::translate(matrix, t);
// get the forward vector of the matrix
glm::vec3 f(matrix[2][0], matrix[2][1], matrix[2][2]);
f = glm::normalize(f);
f = f * (velocity * 3.0f);
f = -f;
camera.translate(f);
and the camera rotation is
void Camera::rotate(GLfloat xoffset, GLfloat yoffset, glm::vec3& c, double& delta, GLboolean constrainpitch) {
xoffset *= (delta * this->rotSpeed);
yoffset *= (delta * this->rotSpeed);
pitch += yoffset;
yaw += xoffset;
if (constrainpitch) {
if (pitch >= maxPitch) {
pitch = maxPitch;
yoffset = 0;
}
if (pitch <= minPitch) {
pitch = minPitch;
yoffset = 0;
}
}
glm::quat Qx(glm::angleAxis(glm::radians(yoffset), glm::vec3(1.0f, 0.0f, 0.0f)));
glm::quat Qy(glm::angleAxis(glm::radians(xoffset), glm::vec3(0.0f, 1.0f, 0.0f)));
glm::mat4 rotX = glm::mat4_cast(Qx);
glm::mat4 rotY = glm::mat4_cast(Qy);
view = glm::translate(view, c);
view = rotX * view;
view = view * rotY;
view = glm::translate(view, -c);
}
float is sometimes not enough.
I use double-precision matrices on the CPU side to avoid such problems, but as you are on Android that might not be possible. For the GPU use floats again, as there are no 64-bit interpolators yet.
Big numbers are usually the problem
If your world is big then you are passing big numbers into the equations, multiplying any errors, and only at the final stage is everything translated relative to the camera position. That means the errors stay multiplied while the numbers get clamped, so the error-to-data ratio gets big.
To reduce this problem, convert all vertices to a coordinate system with its origin at or near your camera before rendering. You can ignore rotations; just offset the positions (a short Java sketch appears after the links below).
This way you only get larger errors far away from the camera, which with perspective is barely visible anyway. For more info see:
ray and ellipsoid intersection accuracy improvement
Use cumulative transform matrix instead of Euler angles
For more info see Understanding 4x4 homogenous transform matrices and all the links at the bottom of that answer.
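To illustrate the relative-to-camera trick in Java terms (the question's code is C++/glm, so treat this only as a sketch with made-up names): keep the big world coordinates in doubles on the CPU, subtract the camera position first, and only then build float matrices from the small remainders.

// World positions may be huge; keep them in doubles on the CPU side.
double[] objectWorld = { 123456.75, 2.0, -987654.25 };
double[] cameraWorld = { 123450.00, 1.7, -987650.00 };

// Offset everything so the camera sits (near) the origin before converting to float.
float relX = (float) (objectWorld[0] - cameraWorld[0]);
float relY = (float) (objectWorld[1] - cameraWorld[1]);
float relZ = (float) (objectWorld[2] - cameraWorld[2]);

float[] model = new float[16];
Matrix.setIdentityM(model, 0);
Matrix.translateM(model, 0, relX, relY, relZ);   // small, precise numbers go to the GPU

// The view matrix then only carries the camera's rotation, with the eye at the origin.
float[] view = new float[16];
Matrix.setLookAtM(view, 0,
        0f, 0f, 0f,     // eye: origin, because the world was already offset
        0f, 0f, -1f,    // look direction (camera forward)
        0f, 1f, 0f);    // up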
This sounds like a numerical effect to me. Even small offsets coming from your game object will influence the rotation of the following camera, and with small movements/rotations it looks like a vibrating object/camera.
So what you can do is:
Check if the movement is above a threshold value before calculating a new rotation for your camera
When you are above this threshold: do a linear interpolation between the old and the new rotation using the lerp algorithm for the quaternions (see this Unity answer to get a better idea of what your code can look like: Unity lerp discussion); a minimal sketch follows below
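Here is a minimal Java sketch of the threshold-plus-lerp idea (quaternions stored as [x, y, z, w]; the method name and the use of a normalized lerp are my choices, not from the linked answer):

// Blend the previous camera rotation towards the new one; ignore tiny movements.
static float[] smoothRotation(float[] oldQ, float[] newQ, float movement,
                              float threshold, float t) {
    if (movement < threshold) {
        return oldQ;                              // below the threshold: keep the old rotation
    }
    // Take the shorter arc: if the dot product is negative, negate one quaternion.
    float dot = oldQ[0]*newQ[0] + oldQ[1]*newQ[1] + oldQ[2]*newQ[2] + oldQ[3]*newQ[3];
    float sign = dot < 0f ? -1f : 1f;
    float[] q = new float[4];
    for (int i = 0; i < 4; i++) {
        q[i] = (1f - t) * oldQ[i] + t * sign * newQ[i];   // linear blend
    }
    // Re-normalize, since a plain lerp leaves the unit sphere.
    float len = (float) Math.sqrt(q[0]*q[0] + q[1]*q[1] + q[2]*q[2] + q[3]*q[3]);
    for (int i = 0; i < 4; i++) {
        q[i] /= len;
    }
    return q;
}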

How to use OpenGL to emulate OpenCV's warpPerspective functionality (perspective transform)

I've done image warping using OpenCV in Python and C++; see the Coca Cola logo warped in place into the corners I had selected (full album with transition pics and description here).
I need to do exactly this, but in OpenGL. I'll have:
Corners inside which I've to map the warped image
A homography matrix that maps the transformation of the logo image
into the logo image you see inside the final image (using OpenCV's
warpPerspective), something like this:
[[ 2.59952324e+00, 3.33170976e-01, -2.17014066e+02],
[ 8.64133587e-01, 1.82580111e+00, -3.20053715e+02],
[ 2.78910149e-03, 4.47911310e-05, 1.00000000e+00]]
Main image (the running track image here)
Overlay image (the Coca Cola image here)
Is it possible? I've read a lot and started OpenGL basics tutorials, but can it be done from just what I have? Would the OpenGL implementation be faster, say, around ~10 ms?
I'm currently playing with this tutorial here:
http://ogldev.atspace.co.uk/www/tutorial12/tutorial12.html
Am I going in the right direction? Total OpenGL newbie here, please bear with me. Thanks.
After trying a number of solutions proposed here and elsewhere, I ended up solving this by writing a fragment shader that replicates what 'warpPerspective' does.
The fragment shader code looks something like:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
// NOTE: you will need to pass the INVERSE of the homography matrix, as well as
// the width and height of your image as uniforms!
uniform highp mat3 inverseHomographyMatrix;
uniform highp float width;
uniform highp float height;
void main()
{
// Texture coordinates will run [0,1],[0,1];
// Convert to "real world" coordinates
highp vec3 frameCoordinate = vec3(textureCoordinate.x * width, textureCoordinate.y * height, 1.0);
// Determine what 'z' is
highp vec3 m = inverseHomographyMatrix[2] * frameCoordinate;
highp float zed = 1.0 / (m.x + m.y + m.z);
frameCoordinate = frameCoordinate * zed;
// Determine translated x and y coordinates
highp float xTrans = inverseHomographyMatrix[0][0] * frameCoordinate.x + inverseHomographyMatrix[0][1] * frameCoordinate.y + inverseHomographyMatrix[0][2] * frameCoordinate.z;
highp float yTrans = inverseHomographyMatrix[1][0] * frameCoordinate.x + inverseHomographyMatrix[1][1] * frameCoordinate.y + inverseHomographyMatrix[1][2] * frameCoordinate.z;
// Normalize back to [0,1],[0,1] space
highp vec2 coords = vec2(xTrans / width, yTrans / height);
// Sample the texture if we're mapping within the image, otherwise set color to black
if (coords.x >= 0.0 && coords.x <= 1.0 && coords.y >= 0.0 && coords.y <= 1.0) {
gl_FragColor = texture2D(inputImageTexture, coords);
} else {
gl_FragColor = vec4(0.0,0.0,0.0,0.0);
}
}
Note that the homography matrix we are passing in here is the INVERSE HOMOGRAPHY MATRIX! You have to invert the homography matrix that you would pass into 'warpPerspective'- otherwise this code will not work.
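If the homography only exists as a plain float[9] on the Android side (in Python/C++ you would just use numpy or OpenCV's inv()), a small hand-rolled 3x3 inverse is enough; this helper is a sketch of mine, not part of the original answer:

// Inverts a row-major 3x3 matrix via the adjugate; returns null if it is singular.
static float[] invert3x3(float[] m) {
    float a = m[0], b = m[1], c = m[2];
    float d = m[3], e = m[4], f = m[5];
    float g = m[6], h = m[7], i = m[8];
    float det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);
    if (Math.abs(det) < 1e-12f) {
        return null;
    }
    float invDet = 1.0f / det;
    return new float[] {
        (e * i - f * h) * invDet, (c * h - b * i) * invDet, (b * f - c * e) * invDet,
        (f * g - d * i) * invDet, (a * i - c * g) * invDet, (c * d - a * f) * invDet,
        (d * h - e * g) * invDet, (b * g - a * h) * invDet, (a * e - b * d) * invDet
    };
}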
The vertex shader does nothing but pass through the coordinates:
// Vertex shader
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main() {
// Nothing happens in the vertex shader
textureCoordinate = inputTextureCoordinate.xy;
gl_Position = position;
}
Pass in unaltered texture coordinates and position coordinates (i.e. textureCoordinates = [(0,0),(0,1),(1,0),(1,1)] and positionCoordinates = [(-1,-1),(-1,1),(1,-1),(1,1)], for a triangle strip), and this should work!
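For completeness, here is a hedged Java/GLES20 sketch of that setup (program, textureId, inverseHomography, imageWidth and imageHeight are assumed to exist; also note that OpenGL ES 2.0 requires the transpose argument of glUniformMatrix3fv to be false, so transpose a row-major homography yourself on the CPU if your layout needs it):

float[] texCoords = { 0f, 0f,  0f, 1f,  1f, 0f,  1f, 1f };
float[] positions = { -1f, -1f,  -1f, 1f,  1f, -1f,  1f, 1f };

FloatBuffer texBuf = ByteBuffer.allocateDirect(texCoords.length * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer().put(texCoords);
texBuf.position(0);
FloatBuffer posBuf = ByteBuffer.allocateDirect(positions.length * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer().put(positions);
posBuf.position(0);

GLES20.glUseProgram(program);   // program = the compiled vertex + fragment shaders above

int aPos = GLES20.glGetAttribLocation(program, "position");
int aTex = GLES20.glGetAttribLocation(program, "inputTextureCoordinate");
GLES20.glEnableVertexAttribArray(aPos);
GLES20.glVertexAttribPointer(aPos, 2, GLES20.GL_FLOAT, false, 0, posBuf);
GLES20.glEnableVertexAttribArray(aTex);
GLES20.glVertexAttribPointer(aTex, 2, GLES20.GL_FLOAT, false, 0, texBuf);

// inverseHomography is the float[9] INVERSE of the matrix you would give warpPerspective.
GLES20.glUniformMatrix3fv(
        GLES20.glGetUniformLocation(program, "inverseHomographyMatrix"),
        1, false, inverseHomography, 0);
GLES20.glUniform1f(GLES20.glGetUniformLocation(program, "width"), imageWidth);
GLES20.glUniform1f(GLES20.glGetUniformLocation(program, "height"), imageHeight);

GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
GLES20.glUniform1i(GLES20.glGetUniformLocation(program, "inputImageTexture"), 0);

GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);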
You can do perspective warping of the texture using texture2DProj(), or alternatively using texture2D() by dividing the st coordinates of the texture (which is what texture2DProj does).
Have a look here: Perspective correct texturing of trapezoid in OpenGL ES 2.0.
warpPerspective projects the (x,y,1) coordinate with the matrix and then divides (u,v) by w, like texture2DProj(). You'll have to modify the matrix so the resulting coordinates are properly normalised.
In terms of performance, if you want to read the data back to the CPU your bottleneck is glReadPixels. How long it will take depends on your device. If you're just displaying, the OpenGL ES calls will take much less than 10ms, assuming that you have both textures loaded to GPU memory.
[edit] This worked on my Galaxy S9, but on my car's Android it had an issue where the whole output texture was white. I stuck with the original shader and it works :)
You can use mat3*vec3 ops in the fragment shader:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp mat3 inverseHomographyMatrix;
uniform highp float width;
uniform highp float height;
void main()
{
highp vec3 frameCoordinate = vec3(textureCoordinate.x * width, textureCoordinate.y * height, 1.0);
highp vec3 trans = inverseHomographyMatrix * frameCoordinate;
highp vec2 coords = vec2(trans.x / width, trans.y / height) / trans.z;
if (coords.x >= 0.0 && coords.x <= 1.0 && coords.y >= 0.0 && coords.y <= 1.0) {
gl_FragColor = texture2D(inputImageTexture, coords);
} else {
gl_FragColor = vec4(0.0,0.0,0.0,0.0);
}
}
If you want to have a transparent background, don't forget to add:
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
GLES20.glBlendEquation(GLES20.GL_FUNC_ADD);
And set the transpose flag (in case you use the above shader):
GLES20.glUniformMatrix3fv(H_P2D, 1, true, homography, 0);

Get quaternion from Android gyroscope?

The official development documentation suggests the following way of obtaining the quaternion from the 3D rotation rate vector (wx, wy, wz).
// Create a constant to convert nanoseconds to seconds.
private static final float NS2S = 1.0f / 1000000000.0f;
private final float[] deltaRotationVector = new float[4];
private float timestamp;
public void onSensorChanged(SensorEvent event) {
// This timestep's delta rotation to be multiplied by the current rotation
// after computing it from the gyro sample data.
if (timestamp != 0) {
final float dT = (event.timestamp - timestamp) * NS2S;
// Axis of the rotation sample, not normalized yet.
float axisX = event.values[0];
float axisY = event.values[1];
float axisZ = event.values[2];
// Calculate the angular speed of the sample
float omegaMagnitude = sqrt(axisX*axisX + axisY*axisY + axisZ*axisZ);
// Normalize the rotation vector if it's big enough to get the axis
// (that is, EPSILON should represent your maximum allowable margin of error)
if (omegaMagnitude > EPSILON) {
axisX /= omegaMagnitude;
axisY /= omegaMagnitude;
axisZ /= omegaMagnitude;
}
// Integrate around this axis with the angular speed by the timestep
// in order to get a delta rotation from this sample over the timestep
// We will convert this axis-angle representation of the delta rotation
// into a quaternion before turning it into the rotation matrix.
float thetaOverTwo = omegaMagnitude * dT / 2.0f;
float sinThetaOverTwo = sin(thetaOverTwo);
float cosThetaOverTwo = cos(thetaOverTwo);
deltaRotationVector[0] = sinThetaOverTwo * axisX;
deltaRotationVector[1] = sinThetaOverTwo * axisY;
deltaRotationVector[2] = sinThetaOverTwo * axisZ;
deltaRotationVector[3] = cosThetaOverTwo;
}
timestamp = event.timestamp;
float[] deltaRotationMatrix = new float[9];
SensorManager.getRotationMatrixFromVector(deltaRotationMatrix, deltaRotationVector);
// User code should concatenate the delta rotation we computed with the current rotation
// in order to get the updated rotation.
// rotationCurrent = rotationCurrent * deltaRotationMatrix;
}
}
My question is:
It is quite different from the acceleration case, where computing the resultant acceleration using the accelerations ALONG the 3 axes makes sense.
I am really confused why the resultant rotation rate can also be computed with the sub-rotation rates AROUND the 3 axes. It does not make sense to me.
Why would this method - finding the composite rotation rate magnitude - even work?
Since your title does not really match your questions, I'm trying to answer as much as I can.
Gyroscopes don't give an absolute orientation (as the ROTATION_VECTOR does) but only rotational velocities around the axes they are built to 'rotate' around. This is due to the design and construction of a gyroscope: picture a spinning mass which, due to the laws of physics, does not want to change its rotation. Now you can rotate the frame around it and measure these rotations.
Now if you want to obtain something like the 'current rotational state' from the gyroscope, you will have to start with an initial rotation, call it q0, and constantly add to it those tiny rotational differences that the gyroscope measures around the axes: q1 = q0 + gyro0, q2 = q1 + gyro1, ...
In other words: the gyroscope gives you the difference it has rotated around the three constructed axes, so you are not composing absolute values but small deltas.
Now this is very general and leaves a couple of questions unanswered:
Where do I get an initial position from? Answer: Have a look at the Rotation Vector Sensor - you can use the Quaternion obtained from there as an initialisation
How to 'sum' q and gyro?
Depending on the current representation of a rotation: If you use a rotation matrix, a simple matrix multiplication should do the job, as suggested in the comments (note that this matrix-multiplication implementation is not efficient!):
/**
 * Performs naive n^3 matrix multiplication and returns C = A * B
 *
 * @param A Matrix in array form (e.g. 3x3 => 9 values)
 * @param B Matrix in array form (e.g. 3x3 => 9 values)
 * @return A * B
 */
public float[] naivMatrixMultiply(float[] B, float[] A) {
int mA, nA, mB, nB;
mA = nA = (int) Math.sqrt(A.length);
mB = nB = (int) Math.sqrt(B.length);
if (nA != mB)
throw new RuntimeException("Illegal matrix dimensions.");
float[] C = new float[mA * nB];
for (int i = 0; i < mA; i++)
for (int j = 0; j < nB; j++)
for (int k = 0; k < nA; k++)
C[i + nA * j] += (A[i + nA * k] * B[k + nB * j]);
return C;
}
To use this method, imagine that mRotationMatrix holds the current state, these two lines do the job:
SensorManager.getRotationMatrixFromVector(deltaRotationMatrix, deltaRotationVector);
mRotationMatrix = naivMatrixMultiply(mRotationMatrix, deltaRotationMatrix);
// Apply rotation matrix in OpenGL
gl.glMultMatrixf(mRotationMatrix, 0);
If you chose to use Quaternions, imagine again that mQuaternion contains the current state:
// Perform Quaternion multiplication
mQuaternion.multiplyByQuat(deltaRotationVector);
// Apply Quaternion in OpenGL
gl.glRotatef((float) (2.0f * Math.acos(mQuaternion.getW()) * 180.0f / Math.PI),mQuaternion.getX(),mQuaternion.getY(), mQuaternion.getZ());
Quaternion multiplication is described here - equation (23). Make sure you apply the multiplication correctly, since it is not commutative!
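For reference, the Hamilton product that a multiplyByQuat method has to implement looks like this in Java (quaternions stored as [w, x, y, z]; this is a sketch, not the code of any particular library):

// Returns q1 * q2 (q2 is applied first, then q1); both are stored as [w, x, y, z].
static float[] multiplyQuat(float[] q1, float[] q2) {
    float w1 = q1[0], x1 = q1[1], y1 = q1[2], z1 = q1[3];
    float w2 = q2[0], x2 = q2[1], y2 = q2[2], z2 = q2[3];
    return new float[] {
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2
    };
}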
If you simply want to know the rotation of your device (I assume this is what you ultimately want), I strongly recommend the ROTATION_VECTOR sensor. Gyroscopes, on the other hand, are quite precise for measuring rotational velocity and have a very good dynamic response, but they suffer from drift and don't give you an absolute orientation (relative to magnetic north or to gravity).
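If the ROTATION_VECTOR sensor is enough for you, getting a quaternion out of it only takes a few lines (a sketch; error handling and unregistering are omitted):

SensorManager sm = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
Sensor rotationVector = sm.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
sm.registerListener(new SensorEventListener() {
    private final float[] quaternion = new float[4];   // stored as [w, x, y, z]

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Converts the rotation vector directly into a normalized quaternion.
        SensorManager.getQuaternionFromVector(quaternion, event.values);
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}, rotationVector, SensorManager.SENSOR_DELAY_GAME);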
UPDATE: If you want to see a full example, you can download the source-code for a simple demo-app from https://bitbucket.org/apacha/sensor-fusion-demo.
Makes sense to me. Acceleration sensors typically work by having some measurable quantity change when force is applied to the axis being measured. E.g. if gravity is pulling down on the sensor measuring that axis, it conducts electricity better. So now you can tell how hard gravity, or acceleration in some direction, is pulling. Easy.
Meanwhile gyros are things that spin (OK, or bounce back and forth in a straight line like a tweaked diving board). The gyro is spinning, now you spin, the gyro is going to look like it is spinning faster or slower depending on the direction you spun. Or if you try to move it, it will resist and try to keep going the way it is going. So you just get a rotation change out of measuring it. Then you have to figure out the force from the change by integrating all the changes over the amount of time.
Typically none of these things are one sensor either. They are often 3 different sensors all arranged perpendicular to each other, and measuring a different axis. Sometimes all the sensors are on the same chip, but they are still different things on the chip measured separately.

Simple textured quad rotation in OpenGL ES 2.0

Edit 6 - Complete re-write in relation to comments/ongoing research
Edit 7 - Added projection / view matrix.....
As I'm not getting far with this, I added the view/projection matrix from the Google demo - please see the code below. If anyone can point out where I'm going wrong it really would be appreciated, as I'm still getting a blank screen when I put "gl_Position = a_position * uMVPMatrix;" into my vertex shader (with "gl_Position = a_position;" my quad is displayed, at least).
Declared at class level: (Quad class)
private final float[] rotationMat = new float[16];
private FloatBuffer flotRotBuf;
ByteBuffer rotBuf;
private int muRotationHandle = -1; // Handle to the rotation matrix in the vertex shader called "uRotate"
Declared at class level: (Renderer class)
private final float[] mVMatrix = new float[16];
private final float[] mProjMatrix = new float[16];
private final float[] mMVPMatrix = new float[16];
Routine that sets the texture and does (or is supposed to do) the rotation (this is in my Quad class):
public void setTexture(GLSurfaceView view, Bitmap imgTexture, float[] mvpMatrix){
this.imgTexture=imgTexture;
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(iProgId, "uMVPMatrix");
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
// Matrix.setRotateM(rotationMat, 0, 45f, 0, 0, 1.0f); //Set rotation matrix with angle and (z) axis
// rotBuf = ByteBuffer.allocateDirect(rotationMat.length * 4);
// use the device hardware's native byte order
// rotBuf.order(ByteOrder.nativeOrder());
// create a floating point buffer from the ByteBuffer
// flotRotBuf = rotBuf.asFloatBuffer();
// add the coordinates to the FloatBuffer
// flotRotBuf.put(rotationMat);
// set the buffer to read the first coordinate
// flotRotBuf.position(0);
// muRotationHandle = GLES20.glGetUniformLocation(iProgId, "uRotation"); // grab the variable from the shader
// GLES20.glUniformMatrix4fv(muRotationHandle, 1, false, flotRotBuf); //Pass floatbuffer contraining rotation matrix info into vertex shader
//GLES20.glUniformMatrix4fv(muRotationHandle, 1, false, rotationMat, 1); //Also tried this ,not use floatbuffer
//Vertex shader
String strVShader =
// "uniform mat4 uRotation;" +
"uniform mat4 uMVPMatrix;" +
"attribute vec4 a_position;\n"+
"attribute vec2 a_texCoords;" +
"varying vec2 v_texCoords;" +
"void main()\n" +
"{\n" +
"gl_Position = a_Position * uMVPMatrix;"+ //This is where it all goes wrong....
"v_texCoords = a_texCoords;" +
"}";
//Fragment shader
String strFShader =
"precision mediump float;" +
"varying vec2 v_texCoords;" +
"uniform sampler2D u_baseMap;" +
"void main()" +
"{" +
"gl_FragColor = texture2D(u_baseMap, v_texCoords);" +
"}";
iProgId = Utils.LoadProgram(strVShader, strFShader);
iBaseMap = GLES20.glGetUniformLocation(iProgId, "u_baseMap");
iPosition = GLES20.glGetAttribLocation(iProgId, "a_position");
iTexCoords = GLES20.glGetAttribLocation(iProgId, "a_texCoords");
texID = Utils.LoadTexture(view, imgTexture);
}
From my renderer class:
public void onSurfaceChanged(GL10 gl, int width, int height) {
// TODO Auto-generated method stub
//Set viewport size based on screen dimensions
GLES20.glViewport(0, 0, width, height);
float ratio = (float) width / height;
Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
}
public void onDrawFrame(GL10 gl) {
// TODO Auto-generated method stub
//Paint the screen the colour defined in onSurfaceCreated
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
// Set the camera position (View matrix)
Matrix.setLookAtM(mVMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
// Calculate the projection and view transformation
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
quad1.setTexture(curView, myBitmap, mMVPMatrix); //SetTexture now modified to take a float array (See above) - Note I know it's not a good idea to have this in my onDrawFrame method - will move it once I have it working!
quad1.drawBackground();
}
I've now removed all rotation related stuff and am now just attempting to get a static quad to display after applying the uMVPMatrix in the vertex shader. But still nothing :-(
If I simply change that line back to the 'default' :
"gl_Position = a_position;\n"+
Then I at least get my textured quad displayed (Obviously no rotation and I would expect that).
Also, just to point out that the mvpMatrix is definitely being received intact by the setTexture method (it contains the same data as appears when I log the contents of mvpMatrix from the Google developers code). I'm not sure how to check whether the shader is receiving it intact? I have no reason to believe it isn't, though.
I really do appreciate any and all help - I must be going very wrong somewhere but I just can't spot it. Thank you!
EDIT 2: Having added a bounty to this question, I would just like to know how to rotate my textured quad sprite (2D), keeping the code I have to render it as a base (i.e., what do I need to add to it in order to rotate, and why). Thanks!
EDIT 3 N/A
EDIT 4 Re-worded / simplified question
EDIT 5 Added error screenshot
Edit: Edited to support Java using Android SDK.
As Tobias indicated, the idiomatic solution to any vertex transformation in OpenGL is accomplished through the use of matrix operations. If you plan to continue developing with OpenGL, it is important that you (eventually) understand the underlying linear algebra involved in matrix operations, but it is often best to use a math library that abstracts the linear algebra into a more readable format. In the Android environment, you should manipulate float arrays with the Matrix class (android.opengl.Matrix) to create a rotation matrix like this:
// initialize rotation matrix
float[] rotationMat = new float[16];
Matrix.setIdentityM(rotationMat,0);
// angle in degrees to rotate
float angle = 90;
// axis to rotate about (z axis in your case)
float[] axis = { 0.0f, 0.0f, 1.0f };
// For your case, rotate angle (in degrees) about the z axis.
Matrix.rotateM(rotationMat,0,angle,axis[0],axis[1],axis[2]);
Then you can bind the rotation Matrix to a shader program like this:
// assuming shader program is currently bound ...
GLES20.glUniformMatrix4fv(GLES20.glGetUniformLocation(shaderProgramID, "uRotation"), 1, false, rotationMat, 0);
Where your vertex shader (of the program being passed rotationMat) would look something like:
precision mediump float;
uniform mat4 uMVPMatrix;
uniform mat4 uRotation;
attribute vec2 a_texCoords;
attribute vec3 a_position;
varying vec2 v_texCoord;
void main(void)
{
v_texCoord = a_texCoords;
gl_Position = uMVPMatrix * uRotation * vec4(a_position, 1.0);
}
Alternatively, you could premultiply uMVPMatrix* uRotation outside of this shader program and pass the result to your shader program to avoid excessive duplicate computation.
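Premultiplying on the CPU is one extra multiplyMM call per frame; here is a minimal sketch using the handles already declared in the question (the combined array name is mine):

float[] mvpRotation = new float[16];
// Combine once per frame on the CPU: mvpRotation = uMVPMatrix * uRotation.
Matrix.multiplyMM(mvpRotation, 0, mMVPMatrix, 0, rotationMat, 0);
// Upload the single combined matrix; the shader then only needs one mat4 uniform.
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpRotation, 0);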
Once you are comfortable using this higher level API for matrix operations you can investigate how the internal operation is performed by reading this fantastic tutorial written by Nicol Bolas.
Rotation matrix for rotation around z:
cos a -sin a 0
sin a cos a 0
0 0 1
How to remember how to construct it:
a is the angle in radians; for a = 0 the matrix yields the identity matrix. cos has to be on the diagonal. There has to be a minus sign in front of one of the sin entries; switching the signs reverses the rotation's direction.
Likewise rotations around x and y can be constructed.
Around x:
1 0 0
0 cos a sin a
0 -sin a cos a
Around y:
cos a 0 sin a
0 1 0
-sin a 0 cos a
If you are not familiar with matrix-arithmetic, here is some code:
for (int i=0; i<4; i++) {
vertices_new[i*5+0] = cos(a) * vertices[i*5+0] - sin(a) * vertices[i*5+1]; // cos(a) * v[i].x - sin(a) * v[i].y + 0 * v[i].z
vertices_new[i*5+1] = sin(a) * vertices[i*5+0] + cos(a) * vertices[i*5+1]; // sin(a) * v[i].x + cos(a) * v[i].y + 0 * v[i].z
vertices_new[i*5+2] = vertices[i*5+2]; // 0 * v[i].x + 0 * v[i].y + 1 * v[i].z
vertices_new[i*5+3] = vertices[i*5+3]; // copy texture u
vertices_new[i*5+4] = vertices[i*5+4]; // copy texture v
}

Android OpenGL ES 2.0 Vignette Mask

I am on Android API Level 9. I have a Camera preview loaded into a SurfaceView. I am trying to draw a vignette mask over this. In order to do so I am using a GLSurfaceView. I prepared a mask in XCode shader builder using the following fragment shader code (or is it a pixel shader?) which compiles successfully so far:
uniform sampler2D tex;
void main()
{
float innerAlpha = 0.0;
float outerAlpha = 1.0;
float len = 1.7;
float startAdjustment = -0.2;
float diff = 0.4;
float alphaStep = outerAlpha / len;
vec2 center = vec2(0.5, 0.5);
vec2 foc1 = vec2(diff,0.);
vec2 foc2 = vec2(-diff,0.);
float r = distance(center+foc1,gl_TexCoord[0].xy) + distance(center+foc2,gl_TexCoord[0].xy);
float alpha = r - (diff * 2.0) * alphaStep - startAdjustment;
vec4 vColor = vec4(0.,0.,0., innerAlpha + alpha);
gl_FragColor = vColor;
}
However, I do not know how to turn this into code for Android. Basically, I think I would need to create a rectangle covering the whole view and apply this kind of code-generated texture to it. I just cannot figure out the actual code. Ideally, it should be in OpenGL ES 2.0.
Edit1:
@Tim - I tried to follow the tutorials here http://developer.android.com/training/graphics/opengl/draw.html
and here
http://www.learnopengles.com/android-lesson-one-getting-started/
and I basically understand how to draw a triangle. But I do not understand how to draw a rectangle - I mean, do I really need to draw two triangles, or can I just define a rectangle (or other complex shapes) right away?
As for the textures - in all the tutorials I have seen, textures are loaded from image files, but I would be interested in knowing how I can actually generate one using the pixel shader above.
Meanwhile, I have found the answer to how to draw the oval-shaped mask.
Actually, the problem was that I was thinking of gl_FragCoord in the range of 0.0 to 1.0, but it has to be specified in actual pixels instead, e.g. 600.0 x 900.0 etc.
With little tweaks (changing vec2's to floats) I have been able to draw a nice oval-shaped mask over the whole screen in OpenGL. Here is the final fragment shader. Note that you must specify the uniforms before drawing. If you are going to try this, make sure to keep uSlope somewhere between 0.1 and 2.0 to get meaningful results. Also, please note that uInnerAlpha has to be lower than uOuterAlpha with this particular piece of code. For a typical vignette, uInnerAlpha is 0.0 and uOuterAlpha is 1.0.
precision mediump float;
uniform float uWidth;
uniform float uHeight;
uniform float uSlope;
uniform float uStartAdjustment;
uniform float uEllipseLength;
uniform float uInnerAlpha;
uniform float uOuterAlpha;
void main() {
float gradientLength = uHeight * uSlope;
float alphaStep = uOuterAlpha / gradientLength;
float x1 = (uWidth / 2.0);
float y1 = (uHeight / 2.0) - uEllipseLength;
float x2 = (uWidth / 2.0);
float y2 = (uHeight / 2.0) + uEllipseLength;
float dist1 = sqrt(pow(abs(gl_FragCoord.x - x1), 2.0) + pow(abs(gl_FragCoord.y - y1), 2.0));
float dist2 = sqrt(pow(abs(gl_FragCoord.x - x2), 2.0) + pow(abs(gl_FragCoord.y - y2), 2.0));
float dist = (dist1 + dist2);
float alpha = ((dist - (uEllipseLength * 2.0)) * alphaStep - uStartAdjustment) + uInnerAlpha;
if (alpha > uOuterAlpha) {
alpha = uOuterAlpha;
}
if (alpha < uInnerAlpha) {
alpha = uInnerAlpha;
}
vec4 newColor = vec4(1.0, 1.0, 1.0, alpha);
gl_FragColor = newColor;
}
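Since this shader reads everything from uniforms, here is a brief Java sketch of setting them before drawing the full-screen quad (vignetteProgram, viewWidth and viewHeight are assumed names; the numeric values just follow the recommendations above):

GLES20.glUseProgram(vignetteProgram);
GLES20.glUniform1f(GLES20.glGetUniformLocation(vignetteProgram, "uWidth"), viewWidth);
GLES20.glUniform1f(GLES20.glGetUniformLocation(vignetteProgram, "uHeight"), viewHeight);
GLES20.glUniform1f(GLES20.glGetUniformLocation(vignetteProgram, "uSlope"), 0.4f);           // keep between 0.1 and 2.0
GLES20.glUniform1f(GLES20.glGetUniformLocation(vignetteProgram, "uStartAdjustment"), 0.0f);
GLES20.glUniform1f(GLES20.glGetUniformLocation(vignetteProgram, "uEllipseLength"), 100.0f); // in pixels, like gl_FragCoord
GLES20.glUniform1f(GLES20.glGetUniformLocation(vignetteProgram, "uInnerAlpha"), 0.0f);
GLES20.glUniform1f(GLES20.glGetUniformLocation(vignetteProgram, "uOuterAlpha"), 1.0f);
// Draw the full-screen quad (a triangle strip) with alpha blending enabled.
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);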
