I have the following code in my C file...
static const char gVertexShader[] =
"attribute vec4 vPosition;\n"
"attribute vec4 vid;\n"
"varying vec4 fragColor; \n"
"attribute vec4 inColor; \n"
"void main() {\n"
" gl_Position = vid * vPosition;\n"
" fragColor = inColor; \n"
"}\n";
static const char gFragmentShader[] = "precision mediump float;\n"
"varying vec4 fragColor; \n"
"void main() {\n"
" gl_FragColor = fragColor;\n"
"}\n";
.....
GLuint gvPositionHandle;
GLuint gvColorHandle;
GLuint gvIDHandle;
....
GLfloat id[] = { 1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f};
....
glVertexAttribPointer(gvPositionHandle, 2, GL_FLOAT, GL_FALSE, 0,
gTriangleVertices1);
glVertexAttribPointer(gvColorHandle, 4, GL_FLOAT, GL_FALSE, 0, current1);
glVertexAttribPointer(gvIDHandle, 4, GL_FLOAT, GL_FALSE, 0, id);
checkGlError("glVertexAttribPointer");
glEnableVertexAttribArray(gvPositionHandle);
glEnableVertexAttribArray(gvColorHandle);
glEnableVertexAttribArray(gvIDHandle);
checkGlError("glEnableVertexAttribArray");
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
It works fine without the identity matrix...
Then after I add the identity matrix (so I would not expect any change, since it is an identity matrix) I see the following...
So this looks really wrong. Can anyone see what I am doing wrong? Is this a column-major vs. row-major issue or something?
Using mat4 instead of vec4
So I noticed that I should probably be using a mat4 for my transformation, so I changed it to the following...
static const char gVertexShader[] =
"attribute vec4 vPosition;\n"
"uniform mat4 vid;\n"
"varying vec4 fragColor; \n"
"attribute vec4 inColor; \n"
"void main() {\n"
" gl_Position = vid * vPosition ;\n"
" fragColor = inColor; \n"
"}\n";
GLint location = glGetUniformLocation(gProgram, "vid");
glUniformMatrix4fv(location, 1, GL_FALSE, id);
Now at first this appears to work, however when I change it to something like this...
GLfloat id[] = { 10.0f, 0.0f, 0.0f, 0.0f,
0.0f, 10.0f, 0.0f, 0.0f,
0.0f, 0.0f, 10.0f, 0.0f,
0.0f, 0.0f, 0.0f, 10.0f};
It does not zoom, so I have a feeling it is not right yet...
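One thing worth noting: with 10.0f in all four diagonal entries the w component is scaled too, so after the perspective divide the result is (10x/10w, 10y/10w, 10z/10w) = (x/w, y/w, z/w), i.e. no visible change would actually be expected from that matrix. A scale matrix would normally keep the bottom-right (w) entry at 1.0f so that only x, y and z are scaled.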
Related
I have successfully bound 2 external OES textures to my shader. Now I want each texture to take up half of the screen (left half for one texture, right half for the other). How do I go about doing it? Example:
http://vicceskep.hu/kepek/vicces_funny_007445.jpg
(a random image from Google)
Each side should show the full picture of its texture. It would be nice to have an efficient method to do it. The code that I am currently referencing is the bigflake/grafika code from GitHub.
Visit http://bigflake.com/mediacodec/CameraToMpegTest.java.txt to check the code out.
Okay, I think I will give an in-depth clarification of my question, as I do not have much knowledge about 3D projections in OpenGL. Sorry for the numerous edits to the question.
This is my vertex shader code currently:
private static final String VERTEX_SHADER =
// UMVPMATRIX IS AN IDENTITY MATRIX
"uniform mat4 uMVPMatrix;\n" +
//These are from SurfaceTexture.getTransformMatrix()
"uniform mat4 uSTMatrixOne;\n" +
"uniform mat4 uSTMatrixTwo;\n" +
"attribute vec4 aPosition;\n" +
"attribute vec4 aTextureCoord;\n" +
"varying vec2 vTextureCoord;\n" +
"varying vec2 vTextureCoordTwo;\n" +
"void main() {\n" +
" gl_Position = uMVPMatrix * aPosition;\n" +
" vTextureCoord = (uSTMatrix * aTextureCoord).xy;\n" +
" vTextureCoordTwo = (uSTMatrixTwo* aTextureCoord).xy ;\n" +
"}\n";
This is my fragment shader code, which currently does an overlay.
private static final String FRAGMENT_SHADER =
"#extension GL_OES_EGL_image_external : require\n" +
"precision mediump float;\n" + // highp here doesn't seem to matter
"varying vec2 vTextureCoord;\n" +
"varying vec2 vTextureCoordTwo;\n" +
"uniform samplerExternalOES sTextureOne;\n" +
"uniform samplerExternalOES sTextureTwo;\n" +
"void main() {\n" +
" lowp vec4 pixelTop = texture2D(sTextureOne, vTextureCoord);\n" +
" lowp vec4 pixelBot = texture2D(sTextureTwo, vTextureCoordTwo);" +
" gl_FragColor = pixelTop + pixelBot;\n" +
"}\n";
As for aPosition and aTextureCoord, they are currently referenced from the vertex data below. It would be nice if someone explained how mTriangleVerticesData works too (my current understanding is sketched after the attribute setup below).
private final float[] mTriangleVerticesData = {
// X, Y, Z, U, V
-1.0f, -1.0f, 0, 0.f, 0.f,
1.0f, -1.0f, 0, 1.f, 0.f,
-1.0f, 1.0f, 0, 0.f, 1.f,
1.0f, 1.0f, 0, 1.f, 1.f,
};
GLES20.glVertexAttribPointer(maPositionHandle, 3, GLES20.GL_FLOAT, false,
TRIANGLE_VERTICES_DATA_STRIDE_BYTES, mTriangleVertices);
checkGlError("glVertexAttribPointer maPosition");
GLES20.glEnableVertexAttribArray(maPositionHandle);
checkGlError("glEnableVertexAttribArray maPositionHandle");
mTriangleVertices.position(TRIANGLE_VERTICES_DATA_UV_OFFSET);
GLES20.glVertexAttribPointer(maTextureHandle, 2, GLES20.GL_FLOAT, false,
TRIANGLE_VERTICES_DATA_STRIDE_BYTES, mTriangleVertices);
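My current understanding of the layout (the exact constants are my assumption, based on the grafika sample): each vertex is five interleaved floats, X, Y, Z followed by U, V, so the stride is 20 bytes and the texture coordinates start at float offset 3:
private static final int FLOAT_SIZE_BYTES = 4;
// 5 floats per vertex (X, Y, Z, U, V) -> 20-byte stride
private static final int TRIANGLE_VERTICES_DATA_STRIDE_BYTES = 5 * FLOAT_SIZE_BYTES;
private static final int TRIANGLE_VERTICES_DATA_POS_OFFSET = 0;
private static final int TRIANGLE_VERTICES_DATA_UV_OFFSET = 3;
The four vertices are the corners of a full-screen quad in normalized device coordinates (-1 to 1), with U/V mapping the whole texture onto it, drawn as a triangle strip.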
My two external texture bindings currently:
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mTextureID);
//Cam Code
//Set texture to be active
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mTwoTextureID);
You can do it the way @Sung suggested, but conditional statements and loops in shaders, especially fragment shaders, are slow. It's better to render 2 different polygons.
Yeah, I finally got it. I used 2 different programs to draw, used mTriangleVerticesData to adjust the image proportions, and used gl_Position to shift each image.
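In case it helps anyone, a minimal sketch of what the split vertex data can look like (the values here are my own, not from the grafika sample): one quad covering the left half of the screen and one covering the right half, each sampling its full texture, and each drawn with its own program and texture bound:
// Left half of the screen (X from -1 to 0), full texture (U from 0 to 1)
private final float[] mLeftQuadVerticesData = {
    // X, Y, Z, U, V
    -1.0f, -1.0f, 0, 0.f, 0.f,
     0.0f, -1.0f, 0, 1.f, 0.f,
    -1.0f,  1.0f, 0, 0.f, 1.f,
     0.0f,  1.0f, 0, 1.f, 1.f,
};
// Right half of the screen (X from 0 to 1), full texture
private final float[] mRightQuadVerticesData = {
     0.0f, -1.0f, 0, 0.f, 0.f,
     1.0f, -1.0f, 0, 1.f, 0.f,
     0.0f,  1.0f, 0, 0.f, 1.f,
     1.0f,  1.0f, 0, 1.f, 1.f,
};
// each quad: GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);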
I'm having trouble understanding where/how to set up buffers for a native Android application in VS 2015. I apologize if this isn't the best way to ask a question. I appreciate any help/insight.
This is what I have so far:
(in engine_init_display)
GLint vShaderLength = vertex_shader.length();
const GLchar* vcode = vertex_shader.c_str();
GLint fShaderLength = fragment_shader.length();
const GLchar* fcode = fragment_shader.c_str();
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vcode, NULL);
glCompileShader(vs);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fcode, NULL);
glCompileShader(fs);
shader_programme = glCreateProgram();
glAttachShader(shader_programme, fs);
glAttachShader(shader_programme, vs);
glLinkProgram(shader_programme);
GLint pos_id = glGetAttribLocation(shader_programme, "position");
//Set vertex data
glUseProgram(shader_programme);
glVertexAttribPointer(pos_id, //GLuint
3, //GLint size
GL_FLOAT, //GLenum type
GL_FALSE, //GLboolean
(sizeof(float) * 5), //GLsizei stride
points //const GLvoid *pointer
);
glEnableVertexAttribArray(pos_id);
(in engine_draw_frame)
glClearColor(1.0f, 0.41f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices);
eglSwapBuffers(engine->display, engine->surface);
With this, I get a pink (clear colour) background. I'm not sure what I'm doing wrong.
Here are my vertex data and shaders
float points[] =
{
-0.2f, 0.6f, 0.0f,
0.0f, 1.0f,
0.5f, 0.5f, 0.0f,
1.0f, 1.0f,
-0.5f, -0.5f, 0.0f,
0.0f, 0.0f,
0.5f, -0.5f, 0.0f,
1.0f, 0.0f
};
unsigned short indices[] =
{
0, 2, 1, 2, 3, 1
};
std::string vertex_shader =
"#version 300 es \n"
"in vec3 position; \n"
"void main () { \n"
" gl_Position = vec4 (position, 1.0); \n"
"} \n";
std::string fragment_shader =
"#version 300 es \n"
"precision highp float; \n"
"out vec4 frag_colour; \n"
"void main () { \n"
" frag_colour = vec4 (0.5, 0.0, 0.5, 1.0); \n"
"} \n";
OK, I figured it out. There isn't anything wrong with my shaders or vertex array. The problem was that I didn't tell EGL to create an OpenGL ES 2 context using EGL_CONTEXT_CLIENT_VERSION.
Check the Khronos EGL specification, page 43 of the PDF, for more info.
Sample from specification:
EGLContext eglCreateContext(EGLDisplay dpy,
EGLConfig config, EGLContext share_context,
const EGLint *attrib_list);
If attrib_list is left NULL, the default is an OpenGL ES 1 context, and ES 2 shaders will not work in that context.
So, what you need to do is create an attribute list. Something along the lines of:
EGLint contextAttributes[] =
{
    EGL_CONTEXT_CLIENT_VERSION, 2,
    EGL_NONE
};
and pass that to the create context
p_context = eglCreateContext(display, config, NULL, contextAttributes);
Basically, I was so unsure of my ability with vertex buffers that I focused on them for a long time.
I want to draw an invisible triangle in OpenGL ES 3.0. I thought making the alpha channel zero would do it.
Here is how I am passing my triangle vertices:
In my constructor I am initialising my triangle vertices:
final float[] triangle2VerticesData = {
// X, Y, Z,
// R, G, B, A
-0.5f, -0.25f, 0.0f,
1.0f, 1.0f, 0.0f, 0.0f, // Alpha is zero
0.5f, -0.25f, 0.0f,
0.0f, 1.0f, 1.0f, 0.0f, // Alpha is zero
0.0f, 0.559016994f, 0.0f,
1.0f, 0.0f, 1.0f, 0.0f}; // Alpha is zero
Extra Information:
In onSurfaceCreated:
GLES20.glClearColor(0.5f, 0.5f, 0.5f, 0.5f);
// ... creating and attaching shaders, creating view matrix using LookAt
In onDrawFrame:
GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
// ... performing simple rotation and passing MVP to shaders
In Shaders:
final String vertexShader =
"uniform mat4 u_MVPMatrix; \n"
+ "attribute vec4 a_Position; \n"
+ "attribute vec4 a_Color; \n"
+ "varying vec4 v_Color; \n"
+ "void main() \n"
+ "{ \n"
+ " v_Color = a_Color; \n"
+ " gl_Position = u_MVPMatrix \n"
+ " * a_Position; \n"
+ "} \n";
final String fragmentShader =
"precision mediump float; \n"
+ "varying vec4 v_Color; \n"
+ "void main() \n"
+ "{ \n"
+ " gl_FragColor = v_Color; \n"
+ "}
I am able to change the color of the triangle by playing with the R, G, and B values.
BUT setting A (alpha) to 1.0f or 0.0f does not produce any change in transparency/invisibility.
Can anyone tell me where I am going wrong?
If you want the triangle to be completely invisible, simply don't render it. This is the best approach for performance, and with a little extra logic, should be easy to accomplish.
If you want some transparency, i.e. not completely invisible, then be sure to enable blending. (Plenty of information online about how to do blending. Too much to explain in a post.)
GLES20.glEnable(GLES20.GL_BLEND);
and after rendering your transparent objects, disable it again:
GLES20.glDisable(GLES20.GL_BLEND);
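For standard alpha blending you also need to set a blend function, typically something like this (a minimal sketch; the exact factors depend on whether you use premultiplied alpha):
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
Without blending enabled, the alpha written by the fragment shader is simply stored in the framebuffer and does not affect how the triangle is composited over what is behind it, which is why changing A made no visible difference.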
I need to scale an object in OpenGL ES 2.0. Shaders:
private final String vertexShaderCode =
"uniform mat4 uMVPMatrix;" +
"attribute vec4 vPosition;" +
"void main() {" +
//the matrix must be included as a modifier of gl_Position
" gl_Position = vPosition * uMVPMatrix;" +
"}";
private final String fragmentShaderCode =
"precision mediump float;" +
"uniform vec4 vColor;" +
"void main() {" +
" gl_FragColor = vColor;" +
"}";
Projection:
Matrix.orthoM(mProjMatrix,0,
-1.0f, // Left
1.0f, // Right
-1.0f / ratio, // Bottom
1.0f / ratio, // Top
0.01f, // Near
10000.0f);
Drawing setup:
// Set the camera position (View matrix)
Matrix.setLookAtM(mVMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
// Calculate the projection and view transformation
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
Actual render:
float[] scale = {5f,5f,1f};
Matrix.scaleM(scale_matrix, 0, scale[0], scale[1], scale[2]);
Matrix.multiplyMM(r_matrix, 0, scale_matrix, 0, mMVPMatrix, 0);
// Combine the rotation matrix with the projection and camera view
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, r_matrix, 0);
And it will not scale. I can see the triangle and I can rotate it. But scaling does not work.
Since vectors are column vectors in OpenGL, you have to change the order of the matrix multiplication in your vertex shader:
gl_Position = uMVPMatrix * vPosition;
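For completeness, a sketch of the corrected vertex shader string (only the multiplication order changes from the code in the question):
private final String vertexShaderCode =
    "uniform mat4 uMVPMatrix;" +
    "attribute vec4 vPosition;" +
    "void main() {" +
    // the matrix goes on the left of the (column) vector
    "  gl_Position = uMVPMatrix * vPosition;" +
    "}";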
This is supposed to render a cube. It looks like some parts of the rear faces are rendering in front of the ones closest to the camera. This happens even if I set it farther away. This is from my renderer:
public void onDrawFrame(GL10 unused) {
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
// Set the camera position
Matrix.setLookAtM(mVMatrix, 0, 0, 0, -3f, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.setRotateM(mModelMatrix, 0, mAngle, 0f, 1f, 0.0f);
Matrix.multiplyMM(mMVPMatrix, 0, mVMatrix, 0, mModelMatrix, 0);
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mMVPMatrix, 0);
// Draw object
cube.draw(mMVPMatrix, context);
mAngle++;
}
and my object's draw method
public void draw(float[] mvpMatrix, Context context) {
GLES20.glUseProgram(mProgram);
GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX,
GLES20.GL_FLOAT, false,
vertexStride, vertexBuffer);
GLES20.glVertexAttribPointer(mTexHandle, 2,
GLES20.GL_FLOAT, false,
8, textureBuffer);
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glEnableVertexAttribArray(mTexHandle);
GLES20.glActiveTexture ( GLES20.GL_TEXTURE0 );
GLES20.glBindTexture ( GLES20.GL_TEXTURE_2D, mTextureID);
GLES20.glUniform1i ( mSampler, 0 );
mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);
GLES20.glDisableVertexAttribArray(mPositionHandle);
GLES20.glDisableVertexAttribArray(mTexHandle);
}
and my shaders:
String vertexShaderCode =
"uniform mat4 uMVPMatrix;" +
"uniform float u_offset; \n" +
"attribute vec4 a_position; \n" +
"attribute vec2 a_texCoord; \n" +
"varying vec2 v_texCoord; \n" +
"void main() \n" +
"{ \n" +
" gl_Position = uMVPMatrix * a_position; \n" +
" gl_Position.x += u_offset;\n" +
" v_texCoord = a_texCoord; \n" +
"} \n";
String fragmentShaderCode =
"precision mediump float; \n" +
"varying vec2 v_texCoord; \n" +
"uniform sampler2D s_texture; \n" +
"void main() \n" +
"{ \n" +
" gl_FragColor = texture2D(s_texture, v_texCoord); \n" +
"} \n";
and the result
Picture:
http://i.imgur.com/eWI2Uom.png
Thanks
Assuming you're using the depth buffer, you don't seem to be clearing it in your onDrawFrame function. Try:
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT|GLES20.GL_DEPTH_BUFFER_BIT);
I'm not sure about Android-specific code as I program on iOS, but it's my understanding from reading "OpenGL ES 2.0 Programming Guide" (by Munshi et al.) that very little differs. Here's what my code looks like from a recent small project:
After setting up your framebuffer and color-renderbuffer, as you've already done, set up the depth buffer.
GLuint depthRenderbuffer;
GLint backingWidth, backingHeight;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);
glGenRenderbuffers(1, &depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, backingWidth, backingHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);
It's a good idea to include the part about setting the depth-buffer dimensions based on the color renderbuffer, using glGetRenderbufferParameteriv(), because it ensures that no matter what, they're going to match.
One side note: on iOS it's recommended to set up the color renderbuffer storage directly from the underlying iOS drawing layer; the depth renderbuffer, however, requires the call to glRenderbufferStorage() instead, as I've shown above.
You'll also want to include the following lines of code to your draw routine:
glClearDepthf(1.0);
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
If you don't make the call to glEnable(GL_DEPTH_TEST), it seems to glitch at first and then work just fine, at least on my implementation in iOS. The glClearDepthf(1.0) clears it all the way to the far-plane, as opposed to a value of 0.0 which clears it to the front-plane.
It looks like you may have some Android-specific code, but hopefully this gets you off to the right start. Cheers!
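On Android, a rough equivalent (a sketch, assuming a GLSurfaceView; the variable name is just for illustration) is to request a depth buffer when configuring the view and enable depth testing once the surface is created:
// before setRenderer(): request RGBA8888 with a 16-bit depth buffer
glSurfaceView.setEGLContextClientVersion(2);
glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
// in onSurfaceCreated():
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glClearDepthf(1.0f);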