I want to display an image as texture on a quad with OpenGL ES 2.0 using the Android NDK.
I have the following simple vertex and fragment shader:
#define DISP_SHADER_V_SRC "\
attribute vec4 aPos;\n\
attribute vec2 aTexCoord;\n\
varying vec2 vTexCoord;\n\
void main() {\n\
    gl_Position = aPos;\n\
    vTexCoord = aTexCoord;\n\
}"

#define DISP_SHADER_F_SRC "\
precision mediump float;\n\
varying vec2 vTexCoord;\n\
uniform sampler2D sTexture;\n\
void main() {\n\
    gl_FragColor = texture2D(sTexture, vTexCoord);\n\
}"
First, a native "create" method is called when the GLSurfaceView is created. It sets the clear color, builds the shader program, and generates a texture ID using glGenTextures. A "resize" method stores the current view size. Another method sets the texture data like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
I don't believe anything is wrong there. The important part should be the "draw" method. After glClear, glViewport and glUseProgram, I do the following:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texId);
glEnableVertexAttribArray(shAPos);
glVertexAttribPointer(shAPos, 3, GL_FLOAT, GL_FALSE, 0, quadVertices);
glVertexAttribPointer(shATexCoord, 2, GL_FLOAT, GL_FALSE, 0, quadTexCoordsStd);
glEnableVertexAttribArray(shATexCoord);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// then glDisableVertexAttribArray for both attributes
I can confirm that the shader basically works, since gl_FragColor = vec4(1.0); results in a white screen. It just does not work when I load a texture. I tried setting the pixel data to all white using memset, just to confirm that the issue is not related to my image data, but the screen still stays black. What am I missing?
IsaacKleiner's own comment was correct. I ran into the same problem: I was developing an Android app with OpenGL ES 2.0 in C++ via the NDK, and the texture-loading function was originally on the C++ side. For some reason, that did not work. Only after I moved the texture-loading code to the Java side did it work normally. I now load and bind the texture in Java and pass the texture ID to the C++ JNI code. I do not know why this problem exists. I can provide a link to an example that uses OpenGL ES 1.0 and the NDK to show a cube; although it is in Chinese, the code explains itself. Pay attention to how the texture is generated and how the texture ID is passed between Java and C++.
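A minimal sketch of the Java side of that approach (assuming a Bitmap source; the helper name createTexture is illustrative, and the returned ID is what gets handed down to the C++ code through JNI):

import android.graphics.Bitmap;
import android.opengl.GLES20;
import android.opengl.GLUtils;

// Java side: create and upload the texture, then hand the ID to native code.
static int createTexture(Bitmap bitmap) {
    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
    // A non-mipmapped texture needs a non-mipmap min filter to be complete in ES 2.0;
    // otherwise sampling it returns black.
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
    return tex[0]; // pass this ID to the C++ side, e.g. via a native method
}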
I've been spending the better part of the last 2 days hunting down an OpenGL ES bug I've encountered only on some devices. In detail:
I'm trying to implement skeletal animations, using the following GLSL code:
#ifndef NR_BONES_INC
#define NR_BONES_INC

#ifndef NR_MAX_BONES
#define NR_MAX_BONES 256
#endif

in ivec4 aBoneIds;
in vec4 aBoneWeights;

layout (std140) uniform boneOffsets { mat4 offsets[NR_MAX_BONES]; } bones;

mat4 bones_get_matrix() {
    mat4 mat = bones.offsets[aBoneIds.x] * aBoneWeights.x;
    mat += bones.offsets[aBoneIds.y] * aBoneWeights.y;
    mat += bones.offsets[aBoneIds.z] * aBoneWeights.z;
    mat += bones.offsets[aBoneIds.w] * aBoneWeights.w;
    return mat;
}

#endif
This is then included in the vertex shader and used as such:
vec4 viewPos = mv * bones_get_matrix() * vec4(aPos, 1.0);
gl_Position = projection * viewPos;
The desired output, achieved for example on the Android Emulator (armv8) running on my M1 MacBook Pro, is this:
I can't actually capture the output of the faulting device (Epson Moverio BT-350, Android on x86 Intel Atom), sadly, but it's basically the same picture without the head, arms, or legs.
For testing, the uniform buffer bound to boneOffsets is filled from a std::vector<glm::mat4> of size 256 and is created/bound as such:
GLuint buffer = 0; // glGenBuffers expects a GLuint, not a GLint
std::vector<glm::mat4> testData(256, glm::mat4(1.0)); // 256 identity matrices
glGenBuffers(1, &buffer);
glBindBuffer(GL_UNIFORM_BUFFER, buffer);
// std140: an array of mat4 has vec4-aligned columns, so a tightly packed
// glm::mat4 array matches the layout exactly.
glBufferData(GL_UNIFORM_BUFFER, sizeof(glm::mat4) * testData.size(), testData.data(), GL_DYNAMIC_DRAW);
// Attach the buffer to binding point 0 and point the uniform block at that binding.
glBindBufferBase(GL_UNIFORM_BUFFER, 0, buffer);
glUniformBlockBinding(programId, glGetUniformBlockIndex(programId, "boneOffsets"), 0);
Am I missing a step somewhere in my setup? Is this a GPU bug I'm encountering? Have I misunderstood the std140 layout?
P.S.: After every OpenGL call, I run glGetError(), but nothing shows up. Also nothing in the various info logs for Shaders and Programs.
EDIT
It's the next day, and I've tried skipping the UBO and using a plain uniform array (100 elements instead of 256, my model has 70-ish bones anyway). Same result.
I've also just tried with a "texture". It's a 4x256 floating-point RGBA texture (GL_RGBA32F), which is "sampled" as such:
uniform sampler2D bonesTxt;

mat4 read_bones_txt(int id) {
    return mat4(
        texelFetch(bonesTxt, ivec2(0, id), 0),
        texelFetch(bonesTxt, ivec2(1, id), 0),
        texelFetch(bonesTxt, ivec2(2, id), 0),
        texelFetch(bonesTxt, ivec2(3, id), 0));
}
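For reference, the upload for that bone texture might look like the following (a sketch; my code is C++, but these Java GLES30 calls map 1:1 to the C API, and boneData/numBones are placeholder names):

import java.nio.FloatBuffer;
import android.opengl.GLES30;

// Each row of the 4 x numBones texture holds the four columns of one mat4.
static int createBoneTexture(FloatBuffer boneData, int numBones) {
    int[] tex = new int[1];
    GLES30.glGenTextures(1, tex, 0);
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, tex[0]);
    // Float textures are not filterable without an extension; texelFetch needs no filtering anyway.
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MIN_FILTER, GLES30.GL_NEAREST);
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MAG_FILTER, GLES30.GL_NEAREST);
    GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_RGBA32F, 4, numBones, 0,
            GLES30.GL_RGBA, GLES30.GL_FLOAT, boneData);
    return tex[0];
}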
Still no dice. As a comment suggested, I've checked my bone IDs and weights. What I send to glBufferData() is OK, but I can't actually check what's on the GPU, because I can't get RenderDoc (or anything else) to work on my device.
I finally figured it out.
When binding my bone IDs I used glVertexAttribPointer() instead of glVertexAttribIPointer().
I was sending the correct type (GL_INT) to glVertexAttribPointer(), but I didn't read this line in the docs:
For glVertexAttribPointer() [...] values will be converted to floats [...]
As usual, RTFM people.
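In code, the difference is a single call (a sketch; loc, stride and offset stand in for your real values):

// Wrong for integer attributes: glVertexAttribPointer converts the values to float,
// so an ivec4 like aBoneIds ends up reading garbage.
// GLES30.glVertexAttribPointer(loc, 4, GLES30.GL_INT, false, stride, offset);

// Right: glVertexAttribIPointer keeps the values as integers.
GLES30.glVertexAttribIPointer(loc, 4, GLES30.GL_INT, stride, offset);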
I am drawing a triangle in OpenGL like:
MyGLRenderer( )
{
    fSampleVertices = ByteBuffer.allocateDirect( fSampleVerticesData.length * 4 )
            .order( ByteOrder.nativeOrder( ) ).asFloatBuffer( );
    fSampleVertices.put( fSampleVerticesData ).position( 0 );
    Log.d( TAG, "MyGLRender( )" );
}

private FloatBuffer fSampleVertices;
private final float[] fSampleVerticesData =
        { .8f, .8f, 0.0f, -.8f, .8f, 0.0f, -.8f, -.8f, 0.0f };

public void onDrawFrame( GL10 unused )
{
    GLES30.glViewport( 0, 0, mWidth, mHeight );
    GLES30.glClear( GLES30.GL_COLOR_BUFFER_BIT );
    GLES30.glUseProgram( dProgramObject1 );
    GLES30.glVertexAttribPointer( 0, 3, GLES30.GL_FLOAT, false, 0, fSampleVertices );
    GLES30.glEnableVertexAttribArray( 0 );
    GLES30.glDrawArrays( GLES30.GL_TRIANGLES, 0, 3 );
    //Log.d( TAG, "onDrawFrame( )" );
}
Since I have experimented with the coordinates, it didn't take long to figure out that the visible area of the screen is between -1 and 1, so the triangle takes up 80% of the screen. I have also determined that the pixel dimensions of my GLSurfaceView are 2560 in width and 1600 in height.
So then, given a triangle with these pixel-based coordinates (fBoardOuter):
 1112.0f,  800.0f, 0.0f,
-1280.0f,  800.0f, 0.0f,
-1280.0f, -800.0f, 0.0f
I have to either convert those pixel coordinates to something between -1 and 1, or find a way to have GL convert them at the time they are drawn. Since I am very new to OpenGL, I am looking for some guidance on how to do this.
My vertex shader is like:
String sVertexShader1 =
"#version 300 es \n"
+ "in vec4 vPosition; \n"
+ "void main() \n"
+ "{ \n"
+ " gl_Position = vPosition; \n"
+ "} \n";
Would I be correct then in saying that a pixel-based system would be called world coordinates? What I am trying to do right now is just some 2D drawing for a board game.
I've discovered that Android has this function:
orthoM(float[] m, int mOffset, float left, float right, float bottom, float top, float near, float far)
However, nothing in the documentation I've read so far explains how a float[] of pixel coordinates can be transformed into normalized coordinates using that matrix in GLES30.
I've also found the documentation here:
http://developer.android.com/guide/topics/graphics/opengl.html
Based off the documentation I have tried to create an example:
http://pastebin.com/5PTsfSdz
In the pastebin example, I expected fSampleVertices to be much smaller and at the center of the screen, but it isn't; it still covers almost the entire screen, and fBoardOuter just shows me a black screen if I try to pass it to glDrawArrays.
You will probably need to find a book or some good tutorials to get a strong grasp on some of these concepts. But since there are some specific items in your question, I'll try to explain them as well as I can within this format.
The coordinate system you discovered, where the range is [-1.0, 1.0] in the x and y directions, is officially called Normalized Device Coordinates, often abbreviated as NDC. That is very similar to the name you came up with, so some of the OpenGL terminology is actually very logical. :)
At least as long as you're dealing with 2D coordinates, this is the coordinate range your vertex shader needs to produce. I.e., the coordinates you assign to the built-in gl_Position variable need to be within this range to be visible in the output. Things get slightly more complicated if you're dealing with 3D coordinates and are applying perspective projections, but we'll skip over that part for now.
Now, as you already guessed, you have two main options if you want to specify your coordinates in a different coordinate system:
You transform them to NDC in your code before you pass them to OpenGL.
You have OpenGL apply transformations to your input coordinates.
Option 2 is clearly the better one, since GPUs are very efficient at performing this job.
On a very simple level, this means that you modify the coordinates in your vertex shader. If you look at your very simple first vertex shader:
in vec4 vPosition;
void main()
{
gl_Position = vPosition;
}
you get the coordinates provided by your app code in the vPosition input variable, and you assign exactly the same coordinates to the vertex shader output gl_Position.
If you want to use a different coordinate system, you process the input coordinates in the vertex shader code, and assign those processed coordinates to the output instead.
Modern versions of OpenGL don't really have a name for those coordinate systems anymore. There used to be "model coordinates" and "world coordinates" when some of this stuff was still hardwired into a fixed pipeline. Now that this is done with programmable shader code, those concepts are not relevant anymore from the OpenGL point of view. All it cares about are the coordinates that come out of the vertex shader. Everything that happens before that is your own business.
The canonical way of applying linear transformations, which includes the translations and scaling you need for your intended use, is by multiplying the coordinates with a transformation matrix. You already discovered the android.opengl.Matrix package that contains some utility functions for building transformation matrices if you don't want to write the (simple) code yourself.
Once you have a transformation matrix, you pass it into the vertex shader as a uniform variable and apply it in your shader code. In the shader, this looks like, for example:
in vec4 vPosition;
uniform mat4 TransformMat;
void main()
{
gl_Position = TransformMat * vPosition;
}
To set the value of this matrix, you need to get the location of the uniform variable once after linking the shader, with prog being your shader program:
GLint transformLoc = GLES20.glGetUniformLocation(prog, "TransformMat");
Then, at least once, and every time you want to change the matrix, you call:
GLES20.glUniformMatrix4fv(transformLoc, 1, false, mat, 0);
where mat is the matrix you either built yourself, or got from one of the utility functions in android.opengl.Matrix. Note that this call needs to be after you make the program current with glUseProgram().
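For example, for the 2560x1600 view from the question, a matrix mapping pixel coordinates (with the origin at the screen center, as in the fBoardOuter data) to NDC could be built like this (a sketch using android.opengl.Matrix):

float[] transformMat = new float[16];
// Map x in [-1280, 1280] and y in [-800, 800] to NDC [-1, 1].
Matrix.orthoM(transformMat, 0,
        -1280.0f, 1280.0f,   // left, right
        -800.0f,  800.0f,    // bottom, top
        -1.0f,    1.0f);     // near, far
GLES20.glUseProgram(prog);
GLES20.glUniformMatrix4fv(transformLoc, 1, false, transformMat, 0);

With this in place, vertices can be specified directly in pixel units.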
Hi, this is my first post here :)
I'm working on an OpenGL ES 2 2D game engine for Android 2.3. I'm trying to improve performance by building one big VBO instead of drawing sprites one by one, batching 32 sprites into a single draw.
This is my vertex shader definition:
uniform mat4 mvpMatrix[32];
When I use client-side memory:
for (int k = 0; k < m; k++)
mvpBuffor.put(models[k], 0, models[k].length);
mvpBuffor.position(0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, m, false, mvpBuffor);
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, m * VERTEX_PER_SPRITE);
and this works very well. But when I try to use a VBO:
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, batch.mvpBufferIndex);
GLES20.glEnableVertexAttribArray(mMVPMatrixHandle);
GLES20.glVertexAttribPointer(mMVPMatrixHandle, MVP_SIZE * BATCHSIZE, GLES20.GL_FLOAT, false, 0, 0);
I get error code 1281 (GL_INVALID_VALUE).
Does anyone know how to pass an array of mat4 uniforms to a vertex shader?
If you read this page, it should be something like:
GLES20.glUniformMatrix4fv(<handle>, <number of matrices>, <transposed?>, <array of matrices>);
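Putting that together with the declaration from the question (a sketch using the question's own names):

// Vertex shader: uniform mat4 mvpMatrix[32];
int mMVPMatrixHandle = GLES20.glGetUniformLocation(program, "mvpMatrix");
// Upload the first m matrices in one call, from a FloatBuffer positioned at 0:
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, m, false, mvpBuffor);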
It looks as though your shader is still declaring mvpMatrix as a uniform:
uniform mat4 mvpMatrix[32];
yet you're trying to pass it in as an attribute:
GLES20.glEnableVertexAttribArray(mMVPMatrixHandle);
GLES20.glVertexAttribPointer(mMVPMatrixHandle, MVP_SIZE * BATCHSIZE, GLES20.GL_FLOAT, false, 0, 0);
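A uniform handle comes from glGetUniformLocation() and is set with glUniform* calls; glVertexAttribPointer() only works with handles from glGetAttribLocation(). If you really do want a per-vertex mat4 attribute instead, note that it occupies four consecutive attribute locations, one per column. A sketch (the attribute name is illustrative, and a VBO is assumed to be bound):

// GLSL: attribute mat4 aMvp;  (columns live at locations loc .. loc+3)
int loc = GLES20.glGetAttribLocation(program, "aMvp");
int strideBytes = 16 * 4; // one mat4 (16 floats) per vertex
for (int col = 0; col < 4; col++) {
    GLES20.glEnableVertexAttribArray(loc + col);
    GLES20.glVertexAttribPointer(loc + col, 4, GLES20.GL_FLOAT, false,
            strideBytes, col * 4 * 4); // byte offset of this column
}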
For some reason, uniform arrays are broken in GLES on some mobile devices:
https://www.opengl.org/discussion_boards/showthread.php/176674-Uniform-arrays-in-OpenGLES
I am currently building an app for Android, but I have run into some problems with a shader that refuses to render.
Consider the following fragment shader:
uniform vec4 color;
void main(){
gl_FragColor = vec4(1.0);
}
This shader works fine for drawing an object in a solid color (white in this case). The uniform vec4 color is optimized away and cannot be found with glGetUniformLocation() (it returns -1).
Trying to get the color from the uniform variable can be done like so:
uniform vec4 color;
void main(){
gl_FragColor = color;
}
However, when I use this, nothing renders. The shader is created successfully and glGetUniformLocation() returns a valid value (0) for color. But nothing shows on screen, not even black. The only change made is replacing vec4(1.0) with color.
This code has the same result:
uniform vec4 color;
void main(){
gl_FragColor = vec4(1.0)+color;
}
The strange thing is that when I tried the shader in a different project, it works as it should, so the problem must be something I do elsewhere in the code.
This is my drawing method (keep in mind that it works when the color-variable is not in the shader):
GLES20.glUseProgram(colorshader);
GLES20.glUniform4f(colorIndex, 1, 1, 1, 1); //colorIndex is the result of glGetUniformLocation() called with the correct shader index and variable name.
Matrix.multiplyMM(mvpMatrix, 0, vpMatrix, 0, matrix, 0);
GLES20.glUniformMatrix4fv(matrixindex, 1, false, mvpMatrix, 0);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vertices);
GLES20Fix.glVertexAttribPointer(Shader.ATTRIBUTE_VERTEX, 3, GLES20.GL_FLOAT, false, 12, 0);
GLES20.glLineWidth(width);
GLES20.glDrawArrays(GLES20.GL_LINES, 0, count);
I have absolutely no idea what might be causing this odd behaviour; if anyone has any ideas or possible solutions, please help me.
Update:
It seems that using any uniform variable that is not a sampler causes this behaviour in this (and only this) shader.
Update 2
Using glGetError() returns error code 0x502: GL_INVALID_OPERATION.
OK, so I finally figured it out after a load of testing.
In my code I was using multiple shaders, but I had accidentally fetched the mvpMatrix uniform location for only one shader and was reusing it for every shader. Funnily enough, that worked for all the shaders I had before creating this one (the matrix got location 0 in all of them).
In this shader, however, my new vec4 got location 0, which caused the code to assign my vector data to the MVP matrix instead. Fetching fresh uniform locations for each shader made the program work.
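In other words, uniform locations are only valid for the program they were queried from, so they must be fetched per program. A minimal sketch of the fix (othershader stands in for any second program):

// Query locations separately for every linked program.
int colorMatrixLoc = GLES20.glGetUniformLocation(colorshader, "mvpMatrix");
int colorLoc       = GLES20.glGetUniformLocation(colorshader, "color");
int otherMatrixLoc = GLES20.glGetUniformLocation(othershader, "mvpMatrix");

GLES20.glUseProgram(colorshader);
GLES20.glUniformMatrix4fv(colorMatrixLoc, 1, false, mvpMatrix, 0);
GLES20.glUniform4f(colorLoc, 1f, 1f, 1f, 1f);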
color is uninitialized, and most likely the shader aborts.
Here is a shot of it working on a Xoom (sorry, it's sideways):
And here is what it looks like on a Nexus S:
My best guess at the moment is that I'm not enabling something that needs to be enabled on the Nexus S but is not necessary on the Xoom (or is automatically enabled by the Tegra OpenGL driver). I can tell you it's definitely not TEXTURE_2D or GL_BLEND; I'm not sure what else it could be.
I'm also disabling the depth test for these sprites, if that's relevant.
As for the relevant code, I'll do my best.
Vertex Shader
precision highp float;

attribute vec4 aPosition;
uniform mat4 uProjection;
uniform mat4 uView;

void main() {
    gl_Position = uView * aPosition * uProjection;
    gl_PointSize = 100.0;
}
Fragment Shader
precision mediump float;

uniform sampler2D sTexture;

void main() {
    gl_FragColor = texture2D(sTexture, gl_PointCoord);
}
Render Routine for the Point Sprites (obviously the geometry is kosher)
GLES20.glEnable(GLES20.GL_TEXTURE_2D);
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
Matrix.setIdentityM(model, 0);
mProgram.LoadProgram();
GLES20.glEnableVertexAttribArray(mProgram.getHandle("aPosition"));
GLES20.glVertexAttribPointer(mProgram.getHandle("aPosition"), 3, GLES20.GL_FLOAT, false, 0, mPoints);
GLES20.glUniformMatrix4fv(mProgram.getHandle("uView"), 1, false, App.getInstance().currentGameState.mCamera.view, 0);
TextureManager.loadTexture(mTexture, 0);
GLES20.glDrawArrays(GLES20.GL_POINTS, 0, (int) currentMarkers);
Es2Utils.checkGlError("Error With renderMarkers");
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glDisable(GLES20.GL_BLEND);
mProgram.LoadProgram() is the wrapper for my shader; it's used by about five other shaders and isn't really the problem. In this case it also does this:
if (programID != 0) {
    GLES20.glUseProgram(programID);
}
Es2Utils.checkGlError("1");
if (Es2Utils.checkGlError("GLES20.glUseProgram();") || programID == 0) {
    CompileProgram();
    GLES20.glUseProgram(programID);
}
Es2Utils.checkGlError("2");
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
Es2Utils.checkGlError("3");
GLES20.glUniformMatrix4fv(getHandle("uProjection"), 1, false, Projections.getPerspective(), 0);
Es2Utils.checkGlError("4");
Divide the problem into two possible conditions:
a) the texture2D call is returning vec4(0., 0., 0., 1.);
b) something is going wrong somewhere after the fragment shader.
You can rule out (b) by changing your shader to gl_FragColor = vec4(1., 0., 0., 1.); or something similar. This case is less likely, but it's worth a try, just to make sure you're looking in the right spot.
Are your point textures power-of-two? If not, are your wrap modes set to GL_CLAMP_TO_EDGE? On the SGX540 you can't wrap non-PoT textures. Also, I think you need to set the wrap mode before calling glTexImage2D.
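A sketch of that parameter setup (assuming an ES 2.0 context; core ES 2.0 requires clamp-to-edge wrapping and non-mipmap filtering for non-PoT textures):

GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
// Non-PoT textures in ES 2.0: clamp both axes and avoid mipmap min filters.
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);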
Just a heads up: phones with Qualcomm chips clip points when the center goes offscreen, so when the center of your sprite goes off the screen, the entire sprite will disappear. For this reason I avoid using large points.