I have allocated an immutable texture with OpenGL ES 3.1 on Android, using the Java bindings, like this:
GLES32.glGenTextures(1, velocityMap, 0);
GLES32.glBindTexture(GLES32.GL_TEXTURE_2D, velocityMap[0]); // Bind our texture to target
GLES32.glActiveTexture(GLES32.GL_TEXTURE0); // Use texture unit 0
GLES32.glTexStorage2D(GLES32.GL_TEXTURE_2D, 1, GLES32.GL_RGBA32F, texWidth, texHeight); // Allocate immutable storage
// Set texture filtering
GLES32.glTexParameteri(GLES32.GL_TEXTURE_2D, GLES32.GL_TEXTURE_MAG_FILTER, GLES32.GL_LINEAR);
GLES32.glTexParameteri(GLES32.GL_TEXTURE_2D, GLES32.GL_TEXTURE_MIN_FILTER, GLES32.GL_LINEAR);
///////////////////////////// ADDED THANKS TO SOLIDPIXEL -->
GLES32.glGenFramebuffers(1, fbo, 0); // Generate an FBO
GLES32.glBindFramebuffer(GLES32.GL_DRAW_FRAMEBUFFER, fbo[0]); // Bind it to frame buffer target
GLES32.glFramebufferTexture2D(
GLES32.GL_DRAW_FRAMEBUFFER,
GLES32.GL_COLOR_ATTACHMENT0,
GLES32.GL_TEXTURE_2D,
velocityMap[0], 0); // Attach texture
int colourBufs[] = {GLES32.GL_COLOR_ATTACHMENT0};
GLES32.glDrawBuffers(1, colourBufs, 0); // Specify list of colour buffers to draw to
float[] clearColor = {0.0f, 0.0f, 1.0f, 1.0f}; // Set to blue
GLES32.glClearBufferfv(GLES32.GL_COLOR, 0, clearColor, 0); // Clear buffer
<-- ///////////////////////////// END EDIT
// Get the unit number of image2d variable in shader and bind the immutable texture
texLoc = GLES32.glGetUniformLocation(idComputeShaderProgram, "colourMap");
GLES32.glGetUniformiv(idComputeShaderProgram, texLoc, unit, 0);
GLES32.glBindImageTexture(unit[0], velocityMap[0], 0, false, 0, GLES32.GL_WRITE_ONLY, GLES32.GL_RGBA32F);
In a compute shader I use imageStore() to write data:
#version 320 es
#define S_WORKGROUP_SIZE_X 128
#define S_WORKGROUP_SIZE_Y 1
#define S_WORKGROUP_SIZE_Z 1
layout(binding = 0, rgba32f) writeonly uniform lowp image2D colourMap;
layout(local_size_x = S_WORKGROUP_SIZE_X, local_size_y = S_WORKGROUP_SIZE_Y, local_size_z = S_WORKGROUP_SIZE_Z) in;
void main()
{
imageStore(colourMap, ivec2(gl_GlobalInvocationID.xy), vec4(1.0f, 0.0f, 0.0f, 1.0f));
}
Once the compute pass is complete, I use a separate graphics shader program with a uniform sampler2D called image to draw the modified texture on a triangle strip. It is initialised as:
GLES32.glUseProgram(idGraphicsShaderProgram);
GLES32.glBindTexture(GLES31.GL_TEXTURE_2D, velocityMap[0]);
GLES32.glUniform1i(GLES31.glGetUniformLocation(idGraphicsShaderProgram, "image"), 0); // Use texture unit 0
This is executed in the render method as:
GLES32.glUseProgram(idGraphicsShaderProgram);
GLES32.glBindTexture(GLES31.GL_TEXTURE_2D, velocityMap[0]);
GLES32.glClear(GLES31.GL_COLOR_BUFFER_BIT);
GLES32.glBindVertexArray(vao[0]);
GLES32.glDrawArrays(GLES31.GL_TRIANGLE_STRIP, 0, 4);
I'm currently seeing a black texture, which suggests to me that there is no data to show. If I modify my fragment shader to output a flat colour, that works fine, so I believe the graphics shader itself is working properly.
To help debug, I would like to initialise the texture I allocate as a solid blue texture, to isolate whether the problem is with the graphics shader reading the texture or with the compute shader modifying it. However, I don't see how I can upload data to the texture once I've allocated it with glTexStorage2D() -- I use glTexImage2D() on desktop, as there wasn't an immutable storage restriction like there is in GLES, so I could upload data directly. How do I fill an immutable texture with data from the client?
Update:
Having implemented solidpixel's original suggestion I still cannot see any texture. Is there anything else in this minimal code that is obviously incorrect?
Create a framebuffer object, attach the texture as an attachment, and use "glClear" to set it to a constant color.
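As an aside, glTexStorage2D() only forbids re-allocating the texture; uploading pixel data into the allocated levels is still allowed, so a client-side fill can also be done with glTexSubImage2D(). A minimal sketch, assuming the RGBA32F texture from above is still bound and texWidth/texHeight are as before:
// Sketch only: fill the immutable texture with solid blue from client memory
float[] blue = new float[texWidth * texHeight * 4];
for (int i = 0; i < blue.length; i += 4) {
    blue[i + 2] = 1.0f; // B
    blue[i + 3] = 1.0f; // A
}
FloatBuffer blueBuf = ByteBuffer.allocateDirect(blue.length * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
blueBuf.put(blue).position(0);
GLES32.glTexSubImage2D(GLES32.GL_TEXTURE_2D, 0, 0, 0, texWidth, texHeight,
        GLES32.GL_RGBA, GLES32.GL_FLOAT, blueBuf);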
Related
I am creating a simple triangle strip to cover the whole viewport with a single rectangle. Then I apply a 100x100 texture, which changes with every frame, to this surface.
I set up viewPort and initialise my vertexBuffer etc. in the onSurfaceChanged method of my GLSurfaceView class, then call my rendering function from onDrawFrame.
This setup works as it should, but on random occasions only the lower-right quarter of my rectangle gets rendered; the other three quarters of the canvas are filled with the background color! It doesn't happen every time, and the anomaly disappears after rotating the device back and forth (I guess because everything gets reset in onSurfaceChanged).
I have tried re-uploading all vertices on every frame update with GLES20.glBufferData, which seems to get rid of this bug, although it might be that I just wasn't patient enough to observe it happening (as it is quite unpredictable). It's a very simple triangle strip, so I don't think it consumes a lot of time, but it just feels like bad practice to upload data 60 times a second when it isn't changing at all!
//called from onSurfaceChanged
private fun initGL (side:Int) {
/*======== Defining and storing the geometry ===========*/
//vertices for TRIANGLE STRIP
val verticesData = floatArrayOf(
-1.0f,1.0f,//LU
-1.0f,-1.0f,//LL
1.0f,1.0f,//RU
1.0f,-1.0f//RL
)
//float : 32 bit -> 4 bytes
val vertexBuffer : FloatBuffer = ByteBuffer.allocateDirect(verticesData.size * 4)
.order(ByteOrder.nativeOrder()).asFloatBuffer()
vertexBuffer.put(verticesData).position(0)
val buffers = IntArray(1)
GLES20.glGenBuffers(1, buffers, 0)
vertexBufferId = buffers[0]
//upload vertices to GPU
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vertexBufferId)
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER,
vertexBuffer.capacity() * 4, // 4 = bytes per float
vertexBuffer,
GLES20.GL_STATIC_DRAW)
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0)
/*================ Shaders ====================*/
// Vertex shader source code
val vertCode =
"""
attribute vec4 aPosition;
void main(void) {
gl_Position = aPosition;
}
"""
val fragCode =
"""
precision mediump float;
varying vec2 vCoord;
uniform sampler2D u_tex;
void main(void) {
//1-Y, because we need to flip the Y-axis!!
vec4 color = texture2D(u_tex, vec2(gl_FragCoord.x/$side.0, 1.0-(gl_FragCoord.y/$side.0)));
gl_FragColor = color;
}
"""
// Create a shader program object to store
// the combined shader program
val shaderProgram = createProgram(vertCode, fragCode)
// Use the combined shader program object
GLES20.glUseProgram(shaderProgram)
val vertexCoordLocation = GLES20.glGetAttribLocation(shaderProgram, "aPosition")
GLES20.glVertexAttribPointer(vertexCoordLocation, 2, GLES20.GL_FLOAT, false, 0, vertexBuffer)
GLES20.glEnableVertexAttribArray(vertexCoordLocation)
//set ClearColor
GLES20.glClearColor(1f,0.5f,0.5f,0.9f)
//setup a texture buffer array
val texArray = IntArray(1)
GLES20.glGenTextures(1,texArray,0)
textureId = texArray[0]
if (texArray[0]==0) Log.e(TAG, "Error with Texture!")
else Log.e(TAG, "Texture id $textureId created!")
GLES20.glActiveTexture(GLES20.GL_TEXTURE0)
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId)
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE)
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE)
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST)
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST)
GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT,1)
}
//called from onDrawFrame
private fun updateGLCanvas (matrix : ByteArray, side : Int) {
//create ByteBuffer from updated texture matrix
val textureImageBuffer : ByteBuffer = ByteBuffer.allocateDirect(matrix.size * 1)//Byte = 1 Byte
.order(ByteOrder.nativeOrder())//.asFloatBuffer()
textureImageBuffer.put(matrix).position(0)
//do I need to bind the texture in every frame?? I am desperate XD
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId)
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0,GLES20.GL_RGB, side, side, 0, GLES20.GL_RGB, GLES20.GL_UNSIGNED_BYTE, textureImageBuffer)
//bind vertex buffer
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vertexBufferId)
// Clear the color buffer bit
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)
//draw from vertex buffer
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP,0,4)
//unbind vertex buffer
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0)
}
There are no error messages and most of the time the code is behaving as it should ... which makes this a bit difficult to track ...
If you want to use a vertex buffer object, then that buffer has to be currently bound to the GLES20.GL_ARRAY_BUFFER target when the array of generic vertex attribute data is specified by glVertexAttribPointer. The vertex attribute specification then refers to this buffer.
In this case the last parameter of glVertexAttribPointer is treated as a byte offset into the buffer object's data store.
In your case this means the last parameter has to be 0.
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vertexBufferId)
GLES20.glVertexAttribPointer(vertexCoordLocation, 2, GLES20.GL_FLOAT, false, 0, 0)
Note that if no named buffer object is bound (i.e. buffer zero), then the last parameter is a pointer to client memory, and that memory is read every time a draw call is performed.
In your implementation, the data that was uploaded to the GPU is never used, because the vertex attribute specification doesn't reference it.
See also Vertex Specification.
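Putting the two together, a sketch using the identifiers from the question: specify the attribute against the bound VBO once, and the per-frame path then only has to draw.
// One-time setup, after the vertex data has been uploaded with glBufferData:
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vertexBufferId);
GLES20.glVertexAttribPointer(vertexCoordLocation, 2, GLES20.GL_FLOAT, false, 0, 0); // byte offset 0 into the VBO
GLES20.glEnableVertexAttribArray(vertexCoordLocation);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);

// Per frame: the attribute specification already references the VBO,
// so nothing needs to be re-uploaded before drawing.
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);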
I'm trying to do some experiments with OpenGL ES on Android.
I'm trying to write a shader that has two uniform variables pointing to two textures.
One contains the current frame, and the other contains the texture drawn the frame before.
They're created on the Java side like this:
texturenames = new int[2];
GLES20.glGenTextures(2, texturenames, 0);
// Bind texture to texturename
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texturenames[0]);
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texturenames[1]);
They are then passed as uniforms to the shader like this:
int location = GLES20.glGetUniformLocation (ShaderTools.program, "currentTexture" );
GLES20.glUniform1i ( location, 0 );
location = GLES20.glGetUniformLocation (ShaderTools.program, "prevFrameTexture" );
GLES20.glUniform1i ( location, 1 );
This is the content of the fragment shader:
precision mediump float;
varying vec2 v_TexCoordinate;
uniform sampler2D currentTexture;
uniform sampler2D prevFrameTexture;
void main() {
    gl_FragColor = (texture2D(currentTexture, v_TexCoordinate) +
                    texture2D(prevFrameTexture, v_TexCoordinate)) / 2.0;
}
What I want to achieve is a sort of blurring effect that is the average of the current and previous frames.
Is it possible to update prevFrameTexture directly in shader code? I didn't find any way to do this.
As an alternative, how should I tackle this problem?
Should I copy the content of currentTexture into prevFrameTexture on the Java side?
I tried drawing alternately to TEXTURE0 and TEXTURE1 in onDrawFrame, but swapping between them with glActiveTexture doesn't work inside that callback.
Yes, it is possible. Use render to texture (RTT).
You can attach a texture to an FBO and render into it, so you should create two FBOs.
An example of setting up an RTT is below:
glGenFramebuffers(1, &fbo[object_id]);
glBindFramebuffer(GL_FRAMEBUFFER, fbo[object_id]);
glGenRenderbuffers(1, &rboColor[object_id]);
glBindRenderbuffer(GL_RENDERBUFFER, rboColor[object_id]);
Right after that, create the texture with the code below:
glGenTextures(1, &texture[texture_id].texture_id);
glBindTexture(GL_TEXTURE_2D, texture[texture_id].texture_id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture_width, texture_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture[texture_id].texture_id, 0);
Once you have the RTT textures, you can update each one by rendering into its framebuffer.
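In the Android Java bindings the same setup might look like this (a sketch; width, height and the array names are assumptions). Each frame you render into one FBO while sampling the texture filled in the previous frame, then swap the two:
int[] fbos = new int[2];
int[] texs = new int[2];
GLES20.glGenFramebuffers(2, fbos, 0);
GLES20.glGenTextures(2, texs, 0);
for (int i = 0; i < 2; i++) {
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texs[i]);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbos[i]);
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
            GLES20.GL_TEXTURE_2D, texs[i], 0);
}
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
// Per frame: render into fbos[cur] while sampling texs[prev], then swap cur and prev.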
https://github.com/sunglab/StarEngine/blob/master/renderer/StarFBO.cpp
void StarFBO::createFBO(...)
https://github.com/sunglab/StarEngine/blob/master/renderer/StarTexture.cpp
void StarTexture::createTEXTURE_RTT(...)
I've posted my drawing method which is called each frame.
I change the vertices each frame to move the object (which is basically a sprite/textured quad).
As you can see, I was initially creating an array each frame, but I have changed this now: I create the array once and just update it every frame. However, I'm wondering if I can do anything more to improve the efficiency? (Although I'm getting about 90fps, the sprite does not move smoothly all the time; every now and then it just pauses for a split second. I can't see the garbage collector running, but I'm guessing it's due to allocation.)
As I add more sprites/quads the jerkiness gets worse, but even at 100+ quads, although the smoothness has all but gone, my frame rate is still around 60fps, so I can't understand what is slowing this down.
I've also added a screencap from Allocation Tracker
Any help would be appreciated.
public void drawTest(float x, float y, float[] mvpMatrix){
//Convert Co-ordinates
//Left
xPlotLeft = (-MyGLRenderer.ratio)+((x)*MyGLRenderer.coordStepAmountWidth);
//Top
yPlotTop = +1-((y)*MyGLRenderer.coordStepAmountHeight);
//Right
xPlotRight = xPlotLeft+((quadWidth)*MyGLRenderer.coordStepAmountWidth);
//Bottom
yPlotBottom = yPlotTop-((quadHeight)*MyGLRenderer.coordStepAmountHeight);
// Following has been changed as per below. I am now declaring the array initially and just updating it every frame.
// float[] vertices = {
//Top Left
// xPlotLeft,yPlotTop,0, 0,0,
//Top Right
// xPlotRight,yPlotTop,0, 1,0,
//Bottom Left
// xPlotLeft,yPlotBottom,0, 0,1,
//Bottom Right
// xPlotRight,yPlotBottom,0, 1,1
// };
vertices[0]=xPlotLeft;
vertices[1]=yPlotTop;
vertices[2]=0;
vertices[3]=0;
vertices[4]=0;
vertices[5]=xPlotRight;
vertices[6]=yPlotTop;
vertices[7]=0;
vertices[8]=1;
vertices[9]=0;
vertices[10]=xPlotLeft;
vertices[11]=yPlotBottom;
vertices[12]=0;
vertices[13]=0;
vertices[14]=1;
vertices[15]=xPlotRight;
vertices[16]=yPlotBottom;
vertices[17]=0;
vertices[18]=1;
vertices[19]=1;
vertexBuf = ByteBuffer.allocateDirect(vertices.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
vertexBuf.put(vertices).position(0);
//GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
//Bind texture
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texID);
//Use program
GLES20.glUseProgram(iProgId);
// Combine the rotation matrix with the projection and camera view
Matrix.multiplyMM(mvpMatrix2, 0, mvpMatrix, 0, mRotationMatrix, 0);
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(iProgId, "uMVPMatrix");
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix2, 0);
//Set starting position for vertices (0 for position)
vertexBuf.position(0);
//Specify attributes for vertex
GLES20.glVertexAttribPointer(iPosition, 3, GLES20.GL_FLOAT, false, 5 * 4, vertexBuf);
//Enable attribute for position
GLES20.glEnableVertexAttribArray(iPosition);
//Set starting position for vertices (3 for texture)
vertexBuf.position(3);
//Specify attributes for vertex
GLES20.glVertexAttribPointer(iTexCoords, 2, GLES20.GL_FLOAT, false, 5 * 4, vertexBuf);
//Enable attribute for texture
GLES20.glEnableVertexAttribArray(iTexCoords);
//Enable Alpha blending and set blending function
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);
//Draw
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
//Disable Alpha blending
GLES20.glDisable(GLES20.GL_BLEND);
}
ByteBuffer.allocateDirect() allocates a new buffer in memory every frame; you can create the buffer once and overwrite its contents instead. Just call rewind() or position(0) before put().
To improve matters further, use a VBO (vertex buffer object; there are many tutorials online and several questions on SO on this topic) and glBufferSubData to update the buffer each frame.
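A sketch of that pattern, reusing names from the question (the VBO handle and the usage hint are assumptions): allocate the direct buffer and the VBO once, then refresh only the data each frame.
// One-time setup:
vertexBuf = ByteBuffer.allocateDirect(vertices.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
int[] vbo = new int[1];
GLES20.glGenBuffers(1, vbo, 0);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertices.length * 4, null, GLES20.GL_DYNAMIC_DRAW);

// Per frame: overwrite the same client buffer and the same VBO, then draw.
vertexBuf.position(0);
vertexBuf.put(vertices).position(0);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);
GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0, vertices.length * 4, vertexBuf);
GLES20.glVertexAttribPointer(iPosition, 3, GLES20.GL_FLOAT, false, 5 * 4, 0);      // positions at byte offset 0
GLES20.glVertexAttribPointer(iTexCoords, 2, GLES20.GL_FLOAT, false, 5 * 4, 3 * 4); // texcoords after 3 floats
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);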
I am very new to OpenGL, and I am trying to create a 2-pass shader. Basically, it has two framebuffers and two shader programs. It runs the first pass as usual, and then I need to take the resulting texture and pass it as an input to the second shader. How is this done? I cannot see how you take the resulting texture and use it as an input to the next pass.
Here is some code. It assumes I have set up the second filter program, and some attributes and uniforms in that program, correctly:
#Override
public void onDraw(final int textureId, final FloatBuffer cubeBuffer,final FloatBuffer textureBuffer){
//this draws the first pass (this is tested and working)
super.onDraw(textureId, cubeBuffer, textureBuffer);
//change the program
GLES20.glUseProgram(secondFilterProgram);
//clear the old colors
GLES20.glClearColor(0, 0, 0, 1);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glActiveTexture(GLES20.GL_TEXTURE3); //change the texture
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, secondFilterOutputTexture[0]);
GLES20.glUniform1i(secondFilterInputTextureUniform, 3);
cubeBuffer.position(0);
GLES20.glVertexAttribPointer(secondFilterPositionAttribute, 2, GLES20.GL_FLOAT, false, 0, cubeBuffer);
GLES20.glEnableVertexAttribArray(secondFilterPositionAttribute);
textureBuffer.position(0);
GLES20.glVertexAttribPointer(secondFilterTextureCoordinateAttribute, 2, GLES20.GL_FLOAT, false, 0, textureBuffer);
GLES20.glEnableVertexAttribArray(secondFilterTextureCoordinateAttribute); //same as line from init
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
GLES20.glDisableVertexAttribArray(secondFilterPositionAttribute);
GLES20.glDisableVertexAttribArray(secondFilterTextureCoordinateAttribute);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
}
I feel like I am missing a piece of the puzzle here. Again, I am very new to OpenGL, so any help, even conceptual, is appreciated.
What you want to achieve is called render to texture.
A small tutorial on how to do this on Android can be found here:
http://blog.shayanjaved.com/2011/05/13/android-opengl-es-2-0-render-to-texture/
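The core idea, sketched with the Java bindings (width, height and the handle names are assumptions): the first pass renders into a texture attached to an FBO, and the second pass samples that texture while rendering to the default framebuffer.
// Setup: a texture the first pass renders into and the second pass samples from.
int[] passTex = new int[1];
int[] passFbo = new int[1];
GLES20.glGenTextures(1, passTex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, passTex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glGenFramebuffers(1, passFbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, passFbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, passTex[0], 0);

// Per frame:
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, passFbo[0]);
// ... run the first pass here, drawing into passTex[0] ...
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GLES20.glUseProgram(secondFilterProgram);
GLES20.glActiveTexture(GLES20.GL_TEXTURE3);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, passTex[0]); // bind the first pass's output as the input
GLES20.glUniform1i(secondFilterInputTextureUniform, 3);
// ... run the second pass here ...
Note that the texture bound as the sampler for the second pass must be the one the first pass rendered into, not the second pass's own output texture.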
I have one big sprite on the scene - for example 200x200 - and in the app I have an array[200][200] in which I store 0 or 1 for each pixel of the big sprite.
I want to draw one more textured sprite (for example 10x10) above the existing one, but for each pixel of the new sprite I want to work out whether it needs to be drawn on the scene, depending on the provided array (if the array holds '1' at the pixel's corresponding position I need to draw that pixel; if it holds '0' I don't want to draw it (maybe set alpha = 0)).
I think I can use a fragment shader for each of the new sprites, but I can't understand how to provide the array data to the shader to calculate the color for each pixel.
I think I could also use a fragment shader for the whole scene (if rendering to a texture).
I am quite new to OpenGL and can't figure out which way to go.
When I create resources for the scene I try to create my mask:
mask = new float[512*512*4];
for (int i = 0; i < mask.length; i++)
{
mask[i] = 2f;
}
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 1029384756);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 1, GLES20.GL_RGBA, 512, 512, 0, GLES20.GL_RGBA, GLES20.GL_FLOAT, FloatBuffer.wrap(mask));
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
Then, when I draw the new item on the scene, I use the shader:
setShaderProgram(ShaderProgram.getInstance());
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glUniform1i(RadialBlurExample.RadialBlurShaderProgram.sUniformMask, GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 1029384756);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
But I can't see the new item on the scene (maskVal is < 0.5).
I've tried to find a working way to pass an array as a texture, but I can't find one.
Upload your array as a second texture with the same dimensions as the sprite, and then when you draw the sprite, sample the second texture at the same texcoord.
If the sampled mask value doesn't meet the criterion, discard the fragment:
precision mediump float;
uniform sampler2D sprite;
uniform sampler2D mask;
varying vec2 uv;
void main() {
    float maskVal = texture2D(mask, uv).r;
    if (maskVal > 0.5) {
        gl_FragColor = texture2D(sprite, uv);
    } else {
        discard;
    }
}
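On the Java side, the mask could be uploaded as a single-channel byte texture on a second texture unit. A sketch (texture size, names and the uniform location are assumptions; note that the sampler uniform is set to the unit index, not to the GL_TEXTUREn constant):
byte[] maskData = new byte[512 * 512];
// ... fill maskData with 0 or 255 per pixel ...
ByteBuffer maskBuf = ByteBuffer.allocateDirect(maskData.length).order(ByteOrder.nativeOrder());
maskBuf.put(maskData).position(0);

int[] maskTex = new int[1];
GLES20.glGenTextures(1, maskTex, 0);
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, maskTex[0]);
GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE, 512, 512, 0,
        GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, maskBuf);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

// When drawing the sprite: the mask sampler uniform takes the unit index (1), not GL_TEXTURE1.
GLES20.glUniform1i(maskUniformLocation, 1);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0); // sprite texture stays on unit 0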