I'm trying to draw multiple hexagons on the screen that have an alpha channel. The image is this:
So, I load the texture into the program and that's OK. When it runs, the alpha channel is blended with the background color, which is fine, but when two hexagons overlap each other, the overlapped part becomes the color of the background! Here is a picture:
Of course, this is not the effect I expected. I want them to overlap without one texture's background being drawn over the other. Here is my drawing code:
GLES20.glUseProgram(Program);

hVertex = GLES20.glGetAttribLocation(Program, "vPosition");
hColor = GLES20.glGetUniformLocation(Program, "vColor");
uTexture = GLES20.glGetUniformLocation(Program, "u_Texture");
hTexture = GLES20.glGetAttribLocation(Program, "a_TexCoordinate");
hMatrix = GLES20.glGetUniformLocation(Program, "uMVPMatrix");

GLES20.glVertexAttribPointer(hVertex, 3, GLES20.GL_FLOAT, false, 0, bVertex);
GLES20.glEnableVertexAttribArray(hVertex);
GLES20.glUniform4fv(hColor, 1, Color, 0);

GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, hTexture);
GLES20.glUniform1i(uTexture, 0);
GLES20.glVertexAttribPointer(hTexture, 2, GLES20.GL_FLOAT, false, 0, bTexture);
GLES20.glEnableVertexAttribArray(hTexture);

GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);
GLES20.glEnable(GLES20.GL_BLEND);

x = -1; y = 0; z = 0;
for (int i = 0; i < 10; i++) {
    Matrix.setIdentityM(ModelMatrix, 0);
    Matrix.translateM(ModelMatrix, 0, x, y, z);
    x += 0.6f;
    Matrix.multiplyMM(ModelMatrix, 0, ModelMatrix, 0, ProjectionMatrix, 0);
    GLES20.glUniformMatrix4fv(hMatrix, 1, false, ModelMatrix, 0);
    GLES20.glDrawElements(GLES20.GL_TRIANGLES, DrawOrder.length,
            GLES20.GL_UNSIGNED_SHORT, bDrawOrder);
}

GLES20.glDisable(GLES20.GL_BLEND);
GLES20.glDisableVertexAttribArray(hVertex);
}
And my fragment shader:
public final String fragmentShaderCode =
"precision mediump float;" +
"uniform vec4 vColor;" +
"uniform sampler2D u_Texture;" +
"varying vec2 v_TexCoordinate;" +
"void main() {" +
" gl_FragColor = vColor * texture2D(u_Texture, v_TexCoordinate);" +
"}";
And my renderer setup code:
super(context);
setEGLContextClientVersion(2);
getHolder().setFormat(PixelFormat.TRANSLUCENT);
setEGLConfigChooser(8, 8, 8, 8, 8, 8);
renderer = new GLRenderer(context);
setRenderer(renderer);
I already tried different factor combinations for glBlendFunc but nothing seems to work. Does anyone know what the problem is? I'm really lost. If you need any more code, just ask!
Thank you!
My guess is that you need to disable the depth test when drawing these. Since they all appear at the same depth, when you draw your leftmost ring, it writes into the depth buffer for every pixel in the quad, even the transparent ones.
Then when you draw the next quad to the right, the pixels which overlap don't get drawn because they fail the depth test, so you just get a blank area where it intersects with the first quad.
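For instance, a minimal sketch of that draw pass with the depth test disabled, reusing the loop and names from the question:

GLES20.glDisable(GLES20.GL_DEPTH_TEST); // transparent texels of one hexagon can no longer depth-reject the next
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);
for (int i = 0; i < 10; i++) {
    // ... set uMVPMatrix and draw each hexagon as before ...
    GLES20.glDrawElements(GLES20.GL_TRIANGLES, DrawOrder.length,
            GLES20.GL_UNSIGNED_SHORT, bDrawOrder);
}
GLES20.glDisable(GLES20.GL_BLEND);
GLES20.glEnable(GLES20.GL_DEPTH_TEST); // restore it if later passes need it

If blended geometry must still be occluded by opaque geometry, an alternative is to keep the test but disable depth writes with GLES20.glDepthMask(false) while the hexagons are drawn, then re-enable them afterwards.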
Related
I am using code from How can I pass multiple textures to a single shader?. It works fine until I load and bind new textures after my bumpmap image. I can only get this to work if my bumpmap is the last texture loaded and bound, even if all my images are the same size. Here's my code...
public int[] textureIDs = new int[]{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}; // texture image IDs
// load bitmap and bind texture (done 10 times); textureIndex is 1-10
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), imageiD);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureIDs[textureIndex]);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
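As a side note (this is not from the original post): texture names are normally allocated with glGenTextures rather than hard-coded integers, and each texture usually needs filter parameters set after binding. A minimal sketch of that loading pattern, where imageIds is a hypothetical array of drawable resource IDs:

int[] textureIDs = new int[10];
GLES20.glGenTextures(10, textureIDs, 0); // let GL allocate valid texture names
for (int i = 0; i < 10; i++) {
    Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), imageIds[i]);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureIDs[i]);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
    bitmap.recycle();
}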
//render
shaderProgram = GraphicTools.sp_ImageBump;
GLES20.glUseProgram(shaderProgram);

t1 = GLES20.glGetUniformLocation(shaderProgram, "u_texture");
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, globals.textureIDs[1]); // textureIndex
GLES20.glUniform1i(t1, 0);

t2 = GLES20.glGetUniformLocation(shaderProgram, "u_bumptex");
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, globals.textureIDs[2]); // bumpMapIndex
GLES20.glUniform1i(t2, 1);

GLES20.glActiveTexture(GLES20.GL_TEXTURE0); // added this; it allows me to pass 2 textures to the shaders, otherwise TEXTURE1 is black
//fragment shader
precision mediump float;
uniform sampler2D u_bumptex;
uniform sampler2D u_texture;
varying vec2 v_texCoord;
void main()
{
    // get bump map color; just use green channel for brightness
    vec4 bumpColor = texture2D(u_bumptex, v_texCoord);
    gl_FragColor = texture2D(u_texture, v_texCoord) * bumpColor.g;
}
I'm new to OpenGL.
My goal is to render an alpha video with OpenGL inside a TextureView.
I started with Video Effects and tried to modify some colors (for a start: black to transparent) using a custom [EDITED] shader:
#extension GL_OES_EGL_image_external : require
precision mediump float;
uniform samplerExternalOES sTexture;
vec3 first;
vec4 second;
varying vec2 vTextureCoord;
vec2 oTextureCoord;
void main() {
    first[0] = 0.0;
    first[1] = 0.0;
    first[2] = 0.0;
    second[0] = 0.0;
    second[1] = 1.0;
    second[2] = 0.0;
    second[3] = 1.0;
    vec4 color = texture2D(sTexture, vTextureCoord);
    oTextureCoord = vec2(vTextureCoord.x, vTextureCoord.y + 0.5);
    vec4 color2 = texture2D(sTexture, oTextureCoord);
    if (vTextureCoord.y < 0.5) {
        gl_FragColor = vec4(color.r, color.g, color.b, color2.b);
    } else {
        gl_FragColor = color;
    }
}
But I never saw the background under the view.
After some research I added
GLES20.glEnable(GLES20.GL_BLEND);
But now the "transparent color" is a green one :/
To add more information: the source video is a combination of colored frames on the upper part and the alpha mask on the bottom one.
Am I doing it wrong?
Thanks in advance.
[EDIT]
Here is my actual onDrawFrame method:
@Override
public void onDrawFrame(GL10 glUnused) {
    synchronized (this) {
        if (updateSurface) {
            mSurface.updateTexImage();
            mSurface.getTransformMatrix(mSTMatrix);
            updateSurface = false;
        }
    }
    GLES20.glEnable(GLES20.GL_BLEND);
    GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
    GLES20.glClearColor(1.0f, 1.0f, 1.0f, 0.0f);
    GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);

    GLES20.glUseProgram(mProgram);
    checkGlError("glUseProgram");

    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GL_TEXTURE_EXTERNAL_OES, mTextureID[0]);

    mTriangleVertices.position(TRIANGLE_VERTICES_DATA_POS_OFFSET);
    GLES20.glVertexAttribPointer(maPositionHandle, 3, GLES20.GL_FLOAT,
            false, TRIANGLE_VERTICES_DATA_STRIDE_BYTES, mTriangleVertices);
    checkGlError("glVertexAttribPointer maPosition");
    GLES20.glEnableVertexAttribArray(maPositionHandle);
    checkGlError("glEnableVertexAttribArray maPositionHandle");

    mTriangleVertices.position(TRIANGLE_VERTICES_DATA_UV_OFFSET);
    GLES20.glVertexAttribPointer(maTextureHandle, 3, GLES20.GL_FLOAT,
            false, TRIANGLE_VERTICES_DATA_STRIDE_BYTES, mTriangleVertices);
    checkGlError("glVertexAttribPointer maTextureHandle");
    GLES20.glEnableVertexAttribArray(maTextureHandle);
    checkGlError("glEnableVertexAttribArray maTextureHandle");

    Matrix.setIdentityM(mMVPMatrix, 0);
    GLES20.glUniformMatrix4fv(muMVPMatrixHandle, 1, false, mMVPMatrix, 0);
    GLES20.glUniformMatrix4fv(muSTMatrixHandle, 1, false, mSTMatrix, 0);

    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
    checkGlError("glDrawArrays");

    GLES20.glEnable(GLES20.GL_BLEND);
    GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
    GLES20.glFinish();
}
--[EDIT]--
Here is my vertex shader:
uniform mat4 uMVPMatrix;
uniform mat4 uSTMatrix;
attribute vec4 aPosition;
attribute vec4 aTextureCoord;
varying vec2 vTextureCoord;
void main() {
    gl_Position = uMVPMatrix * aPosition;
    vTextureCoord = (uSTMatrix * aTextureCoord).xy;
}
To improve my question, here are some details.
Here is my source:
And here is my goal (alpha channel):
When I leave the clear color as:
GLES20.glClearColor(1f, 1f, 0.9f, 0f);
the yellow background is still there, as you can see! How can I get a transparent view so that the blue one in this picture shows through?
[ANSWER: HOW TO RESOLVE]
1. First, you need to set up your GLSurfaceView to allow transparency in its constructor:
this.getHolder().setFormat(PixelFormat.RGB_565);
this.getHolder().setFormat(PixelFormat.TRANSPARENT);
setEGLConfigChooser(8, 8, 8, 8, 16, 0);
setEGLContextClientVersion(2); // all of this before setRenderer()
2. Ask for transparency in your onDrawFrame method:
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
For me, only the black transparent color rgba(0, 0, 0, 0) worked.
Thanks again to the contributors who helped me.
--[EDIT: REMAINING PROBLEM]--
Here is an example:
On top is a GLSurfaceView with an alpha video, and below it a custom GLTextureView; both are inside a horizontal view.
The top one looks just the way I want! But check what happens when I scroll to the right:
The top view still appears, while the bottom one hides as it should!
Just a quick look through your code already turns up quite a few strange pieces. I wish you had included your vertex data (the positions and texture coordinates) at least. First, please check these lines:
GLES20.glVertexAttribPointer(maTextureHandle, 3, GLES20.GL_FLOAT,
        false, TRIANGLE_VERTICES_DATA_STRIDE_BYTES,
        mTriangleVertices);
These look like texture coordinates, and yet you pass 3 components? Is that correct? Fortunately, if the stride is correct, the extra component has no effect, but still.
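If the coordinates really are 2D, the call would presumably be:

// assumed fix: texture coordinates have 2 components, not 3
GLES20.glVertexAttribPointer(maTextureHandle, 2, GLES20.GL_FLOAT,
        false, TRIANGLE_VERTICES_DATA_STRIDE_BYTES, mTriangleVertices);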
So if I understand correctly that the top part of the video frame carries the color and the bottom part the alpha, then your shader makes little sense to me. Could you show the vertex shader as well? Anyway, what I would expect you to do is use texture coordinates in the range [0, 1], representing the whole texture, and then have the vertex shader output 2 varying vec2 texture coordinates such that:
varying vec2 colorTextureCoordinates; // representing the top part; assigned in main():
varying vec2 alphaTextureCoordinates; // representing the bottom part
colorTextureCoordinates = vec2(textureCoordinates.x, textureCoordinates.y * .5);
alphaTextureCoordinates = vec2(textureCoordinates.x, textureCoordinates.y * .5 + .5);
Then in the fragment shader all you do is mix the 2:
vec3 color = texture2D(sTexture, colorTextureCoordinates).rgb;
float alpha = texture2D(sTexture, alphaTextureCoordinates).b; // Why .b?
gl_FragColor = vec4(color, alpha);
Or am I missing something? Why is there an IF statement in your fragment shader that suggests the top of the texture would use the color and the bottom part would use the color plus the alpha (or the other way around)?
I suggest you first try to draw the image you receive from the video with the most common texture shader, to confirm the whole frame is being drawn correctly. Then you should check that the alpha is working correctly by feeding the shader a semitransparent color: gl_FragColor = vec4(color.rgb, .5). After you confirm both of these produce the expected results, try drawing only the top part of the frame (the color) by using colorTextureCoordinates (the transparent parts should appear black). Then check the alpha part by using alphaTextureCoordinates and gl_FragColor = vec4(alpha, alpha, alpha, 1.0), which should produce a grayscale image where white is full alpha and black is no alpha. After all of these are confirmed, try putting it all together as written above.
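For the first of those steps, a plain pass-through fragment shader for the external video texture might look like this (a sketch, assuming the samplerExternalOES setup from the question):

#extension GL_OES_EGL_image_external : require
precision mediump float;
uniform samplerExternalOES sTexture;
varying vec2 vTextureCoord;
void main() {
    // draw the whole frame untouched, to verify the texture itself
    gl_FragColor = texture2D(sTexture, vTextureCoord);
}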
To make the surface view blend with other views you need to set the transparency, and you need to fill the transparent pixels with a transparent color.
Since you are using blending, you will need to clear the background to a clear color of (.0f, .0f, .0f, .0f). Simply disabling the blend should produce the same result in your case.
After doing so you will still see a black background, because the surface view will not implicitly blend with other views (the background); only the RGB part is used to fill the rendered view. To enable the transparency you should take a look at some other answers, and also this one.
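Those answers typically boil down to configuring the GLSurfaceView before setRenderer(), along these lines (a sketch; names assumed, and whether setZOrderOnTop is appropriate depends on your layout):

setEGLContextClientVersion(2);
setEGLConfigChooser(8, 8, 8, 8, 16, 0);         // request an RGBA8888 config
getHolder().setFormat(PixelFormat.TRANSLUCENT); // surface with an alpha channel
setZOrderOnTop(true);                           // or setZOrderMediaOverlay(true)
setRenderer(renderer);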
I am currently rendering a camera preview using GL ES 2.0 on Android to a SurfaceTexture, rendering it with OpenGL, then transferring it to a media codec's input surface for recording. It is displayed to the user in a surface view, and by setting that surface view's aspect ratio the camera preview is not distorted by the screen size.
The recording is in portrait, but at some point the incoming texture will start coming in landscape, at which point I'd like to zoom out and display it as a "movie": stretched wide to fit the edges of the screen horizontally, with black bars on the top and bottom to maintain the aspect ratio of the texture.
The drawing code in onDrawFrame is pretty simple. The link has the rest of the setup code for shaders and the like but it's just setting up a triangle strip to draw.
private final float[] mTriangleVerticesData = {
// X, Y, Z, U, V
-1.f, -1.f, 0, 0.f, 0.f,
1.f, -1.f, 0, 1.f, 0.f,
-1.f, 1.f, 0, 0.f, 1.f,
1.f, 1.f, 0, 1.f, 1.f,
};
public static final String VERTEX_SHADER =
"uniform mat4 uMVPMatrix;\n" +
"uniform mat4 uSTMatrix;\n" +
"attribute vec4 aPosition;\n" +
"attribute vec4 aTextureCoord;\n" +
"varying vec2 vTextureCoord;\n" +
"void main() {\n" +
" gl_Position = uMVPMatrix * aPosition;\n" +
" vTextureCoord = (uSTMatrix * aTextureCoord).xy;\n" +
"}\n";
private float[] mMVPMatrix = new float[16];
private float[] mSTMatrix = new float[16];
public TextureManager() {
mTriangleVertices = ByteBuffer.allocateDirect(
mTriangleVerticesData.length * FLOAT_SIZE_BYTES)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
mTriangleVertices.put(mTriangleVerticesData).position(0);
mTriangleHalfVertices = ByteBuffer.allocateDirect(
mTriangleVerticesHalfData.length * FLOAT_SIZE_BYTES)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
mTriangleHalfVertices.put(mTriangleVerticesHalfData).position(0);
Matrix.setIdentityM(mSTMatrix, 0);
}
onDrawFrame(){
mSurfaceTexture.getTransformMatrix(mSTMatrix);
GLES20.glUseProgram(mProgram);
checkGlError("glUseProgram");
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mTextureID);
mTriangleVertices.position(TRIANGLE_VERTICES_DATA_POS_OFFSET);
GLES20.glVertexAttribPointer(maPositionHandle, 3, GLES20.GL_FLOAT, false,
TRIANGLE_VERTICES_DATA_STRIDE_BYTES, mTriangleVertices);
checkGlError("glVertexAttribPointer maPosition");
GLES20.glEnableVertexAttribArray(maPositionHandle);
checkGlError("glEnableVertexAttribArray maPositionHandle");
mTriangleVertices.position(TRIANGLE_VERTICES_DATA_UV_OFFSET);
GLES20.glVertexAttribPointer(maTextureHandle, 2, GLES20.GL_FLOAT, false,
TRIANGLE_VERTICES_DATA_STRIDE_BYTES, mTriangleVertices);
checkGlError("glVertexAttribPointer maTextureHandle");
GLES20.glEnableVertexAttribArray(maTextureHandle);
checkGlError("glEnableVertexAttribArray maTextureHandle");
Matrix.setIdentityM(mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(muMVPMatrixHandle, 1, false, mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(muSTMatrixHandle, 1, false, mSTMatrix, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
checkGlError("glDrawArrays");
GLES20.glFinish();
}
Things I've tried that haven't quite worked: scaling mMVPMatrix or mSTMatrix to zoom in. I can zoom in, so I can get the "center slice" of the landscape video to display without distortion, but this cuts off a huge 40% portion of the video, so it isn't a great solution. Zooming out by scaling these matrices causes the texture to repeat the pixels on the edge, because of the clamp-to-edge behavior.
Halving the x, y, z parts of mTriangleVerticesData gives some of the desired behavior, as seen in the screenshot below, exact aspect ratio aside. The center part of the picture is halved and centered, as expected. However, the texture is repeated to the left, right, and bottom, and there is distortion at the top left. What I want is the center to be as it is, with black/nothing surrounding it.
I could scale out then translate mMVPMatrix or mSTMatrix and then change my shader to display black for anything outside (0, 1), but eventually I want to overlay multiple textures on top of one another, like a full-size background and a partial-size foreground texture. To do this I must eventually figure out how to display a texture in only a portion of the available space, not just manipulate the texture so it looks like that's what's happening.
Thanks for reading all that. Any help, suggestions, or wild guesses are appreciated.
The repeated image chunks look like GPU tiling artifacts, not texture repeating. Add a glClear() call to erase the background.
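Something along these lines at the top of onDrawFrame (a sketch):

GLES20.glClearColor(0f, 0f, 0f, 1f); // opaque black background
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);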
It would seem what you are looking for is a "fit" system to get the correct frame for your element. That means, for instance, you have a 100x200 image and you want to display it in a frame of 50x50. The result should then be seeing the whole image in the rectangle (12.5, 0, 25, 50). So the resulting frame must respect the image ratio (25/50 = 100/200), and the original frame boundaries must be respected.
To achieve this you generally need to compare the image ratio and the target frame ratio: imageRatio = imageWidth/imageHeight and frameRatio = frameWidth/frameHeight. Then if the image ratio is larger than the frame ratio you need black borders on top and bottom, while if the frame ratio is larger you will see black borders on the left and right side.
So to compute the target frame:
imageRatio = imageWidth / imageHeight
frameRatio = frameWidth / frameHeight
if (imageRatio > frameRatio) {
    targetFrame = {0, (frameHeight - frameWidth/imageRatio) * .5, frameWidth, frameWidth/imageRatio}   // frame as: {x, y, width, height}
} else {
    targetFrame = {(frameWidth - frameHeight*imageRatio) * .5, 0, frameHeight*imageRatio, frameHeight} // frame as: {x, y, width, height}
}
In your case, the image width and height are the ones received from the stream; the frame width and height describe your target frame, which seems to be a result of matrices, but for the full-screen case they would simply be the values from glOrtho if you use it. The target frame should then be used to construct the vertex positions, so you get exactly the right vertex data to display the full texture.
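A direct Java translation of that computation might look like this (the names are illustrative, not from the original code):

/** Returns {x, y, width, height}: the largest rectangle with the image's
 *  aspect ratio that fits inside the frame, centered ("letterboxed"). */
static float[] fitFrame(float imageWidth, float imageHeight,
                        float frameWidth, float frameHeight) {
    float imageRatio = imageWidth / imageHeight;
    float frameRatio = frameWidth / frameHeight;
    if (imageRatio > frameRatio) {
        // image is relatively wider: bars on top and bottom
        float h = frameWidth / imageRatio;
        return new float[]{0f, (frameHeight - h) * 0.5f, frameWidth, h};
    } else {
        // image is relatively taller: bars on left and right
        float w = frameHeight * imageRatio;
        return new float[]{(frameWidth - w) * 0.5f, 0f, w, frameHeight};
    }
}

The returned rectangle can then feed directly into the quad's vertex positions.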
I see you use matrices to do all the computation; the same algorithm could be converted into a matrix, but I discourage you from doing so. You seem to be overusing matrices, which makes your code hard to maintain. I suggest you stick to an "ortho" projection matrix, use frames to draw textures, and only use matrix scales and translations where it makes sense to do so.
EDIT: Right, fixed it :D Issue was that I was trying to set the Projection matrix before calling glUseProgram()
I'm starting out with GL ES 2.0 on Android, and am trying to migrate some of my code over from 1.1. I've defined vertex and fragment shaders as per the official docs, and after some googling I understand how the Model/Projection matrices work together, yet I can't seem to get anything but a blank screen.
I'm passing a model-view matrix into my vertex shader, and am multiplying it with the ortho projection before multiplying the resulting MVP matrix with the vertex position. Here are my shaders to clarify:
Vertex Shader
attribute vec3 Position;
uniform mat4 Projection;
uniform mat4 ModelView;
void main() {
mat4 mvp = Projection * ModelView;
gl_Position = mvp * vec4(Position.xyz, 1);
}
Fragment Shader
precision mediump float;
uniform vec4 Color;
void main() {
gl_FragColor = Color;
}
I'm building the projection matrix in my renderer's onSurfaceChanged() method:
int projectionHandle = GLES20.glGetUniformLocation(shaderProg, "Projection");
Matrix.orthoM(projection, 0, -width / 2, width / 2, -height / 2, height / 2, -10, 10);
GLES20.glUniformMatrix4fv(projectionHandle, 1, false, projection, 0);
Then in my onDrawFrame(), I call each actor's draw routine, which looks like
int positionHandle = GLES20.glGetAttribLocation(Renderer.getShaderProg(), "Position");
int colorHandle = GLES20.glGetAttribLocation(Renderer.getShaderProg(), "Color");
int modelHandle = GLES20.glGetUniformLocation(Renderer.getShaderProg(), "ModelView");
float[] modelView = new float[16];
Matrix.setIdentityM(modelView, 0);
Matrix.rotateM(modelView, 0, rotation, 0, 0, 1.0f);
Matrix.translateM(modelView, 0, position.x, position.y, 1.0f);
GLES20.glUniformMatrix4fv(modelHandle, 1, false, modelView, 0);
GLES20.glUniform4fv(colorHandle, 1, color.toFloatArray(), 0);
GLES20.glVertexAttribPointer(positionHandle, 3, GLES20.GL_FLOAT, false, 0, vertBuffer);
GLES20.glEnableVertexAttribArray(positionHandle);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
GLES20.glDisableVertexAttribArray(positionHandle);
I realize that I can optimize this a bit, but I just want to get it to work first. The vertices are in a FloatBuffer, and centered around the origin. Any thoughts on what I am doing wrong? I've been checking my code against various tutorials and SO questions/answers, and can't see what I'm doing wrong.
Right, fixed it :D Issue was that I was trying to set the Projection matrix before calling glUseProgram()
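In other words, glUniform* calls write to the currently bound program, so the program has to be bound first. A sketch of the corrected order, using the names from the question:

GLES20.glUseProgram(shaderProg); // bind the program BEFORE setting its uniforms
int projectionHandle = GLES20.glGetUniformLocation(shaderProg, "Projection");
GLES20.glUniformMatrix4fv(projectionHandle, 1, false, projection, 0);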
Edit: Code added, please see below.
Edit 2: Screenshots from the device included at the bottom, along with an explanation.
Edit 3: New code added.
I have 2 classes, a renderer and a custom 'quad' class.
I have these declared at class level in my renderer class:
final float[] mMVPMatrix = new float[16];
final float[] mProjMatrix = new float[16];
final float[] mVMatrix = new float[16];
And in my onSurfaceChanged method I have:
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
GLES20.glViewport(0, 0, width, height);
float ratio = (float) width / height;
Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
}
and....
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
// TODO Auto-generated method stub
myBitmap = BitmapFactory.decodeResource(curView.getResources(), R.drawable.box);
//Create new Dot objects
dot1 = new Quad();
dot1.setTexture(curView, myBitmap);
dot1.setSize(300,187); //These numbers are the size but are redundant/not used at the moment.
myBitmap.recycle();
//Set colour to black
GLES20.glClearColor(0, 0, 0, 1);
}
And finally from this class, onDrawFrame:
@Override
public void onDrawFrame(GL10 gl) {
// TODO Auto-generated method stub
//Paint the screen the colour defined in onSurfaceCreated
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
// Set the camera position (View matrix) so looking from the front
Matrix.setLookAtM(mVMatrix, 0, 0, 0, 3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
// Combine
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
dot1.rotateQuad(0,0,45, mMVPMatrix); //x,y,angle and matrix passed in
}
Then, in my quad class:
This declared at class level:
private float[] mRotationMatrix = new float[16];
private final float[] mMVPMatrix = new float[16];
private final float[] mProjMatrix = new float[16];
private final float[] mVMatrix = new float[16];
private int mMVPMatrixHandle;
private int mPositionHandle;
private int mRotationHandle;
//Create our vertex shader
String strVShader =
"uniform mat4 uMVPMatrix;" +
"uniform mat4 uRotate;" +
"attribute vec4 a_position;\n"+
"attribute vec2 a_texCoords;" +
"varying vec2 v_texCoords;" +
"void main()\n" +
"{\n" +
// "gl_Position = a_position * uRotate;\n"+
// "gl_Position = uRotate * a_position;\n"+
"gl_Position = a_position * uMVPMatrix;\n"+
// "gl_Position = uMVPMatrix * a_position;\n"+
"v_texCoords = a_texCoords;" +
"}";
//Fragment shader
String strFShader =
"precision mediump float;" +
"varying vec2 v_texCoords;" +
"uniform sampler2D u_baseMap;" +
"void main()" +
"{" +
"gl_FragColor = texture2D(u_baseMap, v_texCoords);" +
"}";
Then the method for setting the texture (I don't think this is relevant to this problem, though!):
public void setTexture(GLSurfaceView view, Bitmap imgTexture){
this.imgTexture=imgTexture;
iProgId = Utils.LoadProgram(strVShader, strFShader);
iBaseMap = GLES20.glGetUniformLocation(iProgId, "u_baseMap");
iPosition = GLES20.glGetAttribLocation(iProgId, "a_position");
iTexCoords = GLES20.glGetAttribLocation(iProgId, "a_texCoords");
texID = Utils.LoadTexture(view, imgTexture);
}
And finally, my 'rotateQuad' method (which currently is supposed to draw and rotate the quad).
public void rotateQuad(float x, float y, int angle, float[] mvpMatrix){
Matrix.setRotateM(mRotationMatrix, 0, angle, 0, 0, 0.1f);
// Matrix.translateM(mRotationMatrix, 0, 0, 0, 0); //Removed temporarily
// Combine the rotation matrix with the projection and camera view
Matrix.multiplyMM(mvpMatrix, 0, mRotationMatrix, 0, mvpMatrix, 0);
float[] vertices = {
-.5f,.5f,0, 0,0,
.5f,.5f,0, 1,0,
-.5f,-.5f,0, 0,1,
.5f,-.5f,0, 1,1
};
vertexBuf = ByteBuffer.allocateDirect(vertices.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
vertexBuf.put(vertices).position(0);
//Bind the correct texture
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texID);
//Use program
GLES20.glUseProgram(iProgId);
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(iProgId, "uMVPMatrix");
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
// get handle to shape's rotation matrix
mRotationHandle = GLES20.glGetUniformLocation(iProgId, "uRotate");
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv(mRotationHandle, 1, false, mRotationMatrix, 0);
//Set starting position for vertices
vertexBuf.position(0);
//Specify attributes for vertex
GLES20.glVertexAttribPointer(iPosition, 3, GLES20.GL_FLOAT, false, 5 * 4, vertexBuf);
//Enable attribute for position
GLES20.glEnableVertexAttribArray(iPosition);
//Set starting position for texture
vertexBuf.position(3);
//Specify attributes for vertex
GLES20.glVertexAttribPointer(iTexCoords, 2, GLES20.GL_FLOAT, false, 5 * 4, vertexBuf);
//Enable attribute for texture
GLES20.glEnableVertexAttribArray(iTexCoords);
//Draw it
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
}
For Edit 2:
This is my quad as drawn in the center of the screen. No rotation.
This is the same quad rotated by +45 degrees with gl_Position = a_position * uMVPMatrix; in my vertex shader (it's from a different project now, so the shader variable is a_position and not vPosition); it looks correct!
However, this is the same quad rotated by +45 degrees with the 2 shader variables switched (so it reads gl_Position = uMVPMatrix * a_position;) - as you can see, it's not quite right.
Also, just a side note: you can't see it here as the square is symmetrical, but each version also rotates in the opposite direction to the other....
Any help appreciated.
It's really impossible to tell because we don't know what you are passing to these two variables.
OpenGL uses column-major format, so if vPosition is in fact a vector, and uMVPMatrix is a matrix, then the first option is correct, if this is in your shader.
If this is not in your shader but in your program code, then there is not enough information.
If you are using the first option but getting unexpected results, you are likely not computing your matrix properly or not passing the correct vertices.
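To make "column-major" concrete: android.opengl.Matrix stores a matrix as 16 floats laid out column by column, so for example a translation ends up in elements 12-14 (a small sketch):

// column-major layout:
// m[0] m[4] m[8]  m[12]
// m[1] m[5] m[9]  m[13]
// m[2] m[6] m[10] m[14]
// m[3] m[7] m[11] m[15]
float[] m = new float[16];
Matrix.setIdentityM(m, 0);
Matrix.translateM(m, 0, 2f, 3f, 4f); // now m[12] == 2f, m[13] == 3f, m[14] == 4f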
Normally in the vertex shader you should multiply the positions by the MVP, that is:
gl_Position = uMVPMatrix * vPosition;
When you change the order this should work...
Thanks to all for the help.
I managed to track down the problem (For the most part). I will show what I did.
It was the following line:
Matrix.multiplyMM(mvpMatrix, 0, mvpMatrix, 0, mRotationMatrix, 0);
As you can see, I was multiplying the matrices and storing the result back into one of the matrices I was using as an input to the multiplication (the result values of Matrix.multiplyMM are undefined if the result array overlaps an input).
So I created a new matrix called mvpMatrix2 and stored the results in that. Then passed that to my vertex shader.
//Multiply matrices
Matrix.multiplyMM(mvpMatrix2, 0, mvpMatrix, 0, mRotationMatrix, 0);
//get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(iProgId, "uMVPMatrix");
//Give to vertex shader variable
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix2, 0);
After applying this, there is no distortion (and also, with regard to my other question here, Using Matrix.Rotate in OpenGL ES 2.0, I am able to translate the centre of the quad). I say 'for the most part' because when I rotate it, it rotates backwards (so if I say rotate +45 degrees (clockwise), it actually rotates the quad by -45 degrees (anti-clockwise)). (If the shader still multiplies with the vertex on the left, as in a_position * uMVPMatrix, that is equivalent to multiplying by the transposed matrix; for a pure rotation the transpose is its inverse, which would explain the reversed direction.)
But hopefully, this will help anyone who has a similar problem in the future.