Android OpenGL ES2 Texture gets swapped on desire s - android

I am making a game which I develop on the Nexus 4; it runs fine with no problems. I also tested it on Sony's Xperia U and (for this problem) it runs fine as well.
The problem, however, is on the Desire S (with the same code).
When I start the actual gameplay I have a background texture that should be rendered, but for a second or so it takes the wrong texture. See the effect:
Image of Issue swap:
The left picture is what goes wrong, the right picture is how it should be (not exactly the same time but it is for the visualisation of my problem).
It takes the texture that is assigned to GLES20.GL_TEXTURE4 where it should have taken GLES20.GL_TEXTURE0.
Before I go to the code: something similar happens during gameplay. Whenever particles are rendered, my text (also textured) is swapped with different textures. The moment my particles are gone, the text is rendered correctly again.
Image of Issue swap:
In this example it seems to take the texture that holds my background, but it is random, because I do remember seeing only characters as well, but then with black backgrounds and no transparency. The particles and the text engine use the same ShaderProgram; the background does not, but for completeness, my shader:
public static final String vertexShaderCodeTC =
"uniform mat4 uMVPMatrix;" +
"attribute vec4 vPosition;" +
"attribute vec4 a_Color;" +
"attribute vec2 a_texCoord;" +
"varying vec4 v_Color;" +
"varying vec2 v_texCoord;" +
"void main() {" +
" gl_Position = uMVPMatrix * vPosition;" +
" v_texCoord = a_texCoord;" +
" v_Color = a_Color;" +
"}";
public static final String fragmentShaderCodeTC =
"precision mediump float;" +
"varying vec4 v_Color;" +
"varying vec2 v_texCoord;" +
"uniform sampler2D s_texture;" +
"void main() {" +
" gl_FragColor = texture2D( s_texture, v_texCoord ) * v_Color;" +
" gl_FragColor.rgb *= v_Color.a;" +
"}";
Now here is my rendering code:
int mPositionHandle = GLES20.glGetAttribLocation(riGraphicTools.ShaderProgram, "vPosition");
riGlobal.checkGlError("glGetAttribLocation | vposition |");
// Enable a handle to the triangle vertices
GLES20.glEnableVertexAttribArray(mPositionHandle);
riGlobal.checkGlError("glEnableVertexAttribArray");
// Prepare the background coordinate data
GLES20.glVertexAttribPointer(mPositionHandle, 3,
GLES20.GL_FLOAT, false,
0, vertexBuffer);
riGlobal.checkGlError("glVertexAttribPointer");
int mTexCoordLoc = GLES20.glGetAttribLocation(riGraphicTools.ShaderProgram, "a_texCoord" );
riGlobal.checkGlError("glGetAttribLocation | a_texCoord");
// Prepare the texturecoordinates
GLES20.glVertexAttribPointer ( mTexCoordLoc, 2, GLES20.GL_FLOAT,
false,
0, textureBuffer);
riGlobal.checkGlError("glVertexAttribPointer");
GLES20.glEnableVertexAttribArray ( mPositionHandle );
riGlobal.checkGlError("glEnableVertexAttribArray");
GLES20.glEnableVertexAttribArray ( mTexCoordLoc );
riGlobal.checkGlError("glEnableVertexAttribArray");
// get handle to shape's transformation matrix
int mtrxhandle = GLES20.glGetUniformLocation(riGraphicTools.ShaderProgram, "uMVPMatrix");
riGlobal.checkGlError("glGetUniformLocation");
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv(mtrxhandle, 1, false, m, 0);
riGlobal.checkGlError("glUniformMatrix4fv");
int mSamplerLoc = GLES20.glGetUniformLocation (riGraphicTools.ShaderProgram, "s_texture" );
riGlobal.checkGlError("glGetUniformLocation");
// Set the sampler to the texture unit holding the current theme's texture
GLES20.glUniform1i ( mSamplerLoc, riGlobal.get().getThemeStateOne());
riGlobal.checkGlError("glUniform1i");
// Draw the triangle
GLES20.glDrawElements(GLES20.GL_TRIANGLES, GameBackground.indices.length,
GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
riGlobal.checkGlError("glDrawElements");
// Disable vertex array
GLES20.glDisableVertexAttribArray(mPositionHandle);
riGlobal.checkGlError("glDisableVertexAttribArray");
GLES20.glDisableVertexAttribArray(mTexCoordLoc);
riGlobal.checkGlError("glDisableVertexAttribArray");
When I debug, my theme-state function returns the correct texture unit, so it should render my background (as it does on all other devices), but somehow it does not on the Desire S.
This is probably not all the code you need but I do not know exactly what you need and posting my whole app seems somewhat overkill, so please ask and I will explain etc.
I have tested this on multiple devices as stated, so it really seems to be Desire S only, or some config/setting that just affects the Desire S, but I can't seem to find out what. Does anybody have any idea or suggestion on how to get it rendering correctly again?
P.S. The logic I see is that when I render my red blocks (effect) at the start (and end, for that matter), it swaps the background texture somehow; as soon as I no longer render any triangles with that red block texture, the background renders correctly again. The same I see with particles: rendering them messes up the text. The strange thing here, though, is that the particle image is on the same texture as my background, so during gameplay without particles it already draws from that texture.
Also, it only happens on those two places/instances in all parts of my game/menus/parts/etc.

Fixed this problem. It was a bug in the driver software of the GPU: the texture unit id was only taken once. More info: http://androidblog.reindustries.com/hack-bad-gpu-fix-not-using-correct-texture-opengl/
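The linked article describes the hack in detail. As a rough, hypothetical sketch of the idea (the exact workaround and all names here are my assumption, not taken from the article): if the driver only latches the texture unit the first time the sampler uniform is written, forcing a throwaway value before the real one each frame makes it re-latch. The GL calls are stubbed below so the call order can be shown without a device:

```java
import java.util.ArrayList;
import java.util.List;

// Stubbed sketch of the suspected driver-bug workaround: write the sampler
// uniform twice per frame (dummy unit, then real unit) so a driver that only
// "takes" the unit once is forced to take it again. Names are hypothetical.
public class SamplerWorkaround {
    public final List<String> calls = new ArrayList<>();

    // Stub standing in for GLES20.glUniform1i; records the call order.
    void glUniform1i(int location, int unit) {
        calls.add("glUniform1i(" + location + ", " + unit + ")");
    }

    public void setSamplerUnit(int samplerLoc, int realUnit) {
        int dummyUnit = (realUnit == 0) ? 1 : 0; // any unit != realUnit
        glUniform1i(samplerLoc, dummyUnit);      // throwaway write
        glUniform1i(samplerLoc, realUnit);       // the unit we actually want
    }
}
```

On a healthy driver the extra write is harmless; only the last value bound before the draw call matters.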

Related

Simple Shader with OpenGL 'Mix' function does not work

I have a simple shader code.
I pass in two image textures (NOTE: one is samplerExternalOES and the other is sampler2D).
The first texture, 'sTexture', is the original image I get from a camera frame.
The second texture, 'sTexture2', is a mask I get from the CPU.
The shader is as follows:
uniform samplerExternalOES sTexture;
uniform sampler2D sTexture2;
varying vec2 v_TexCoord;
void main(void)
{
vec4 originalrgb = vec4((texture2D(sTexture, v_TexCoord).rgb), 1.0);
vec4 floodfillimage = vec4((texture2D(sTexture2, v_TexCoord).rgb), 1.0);
/*Code To Colour Input Image with Blue Tint Color*/
vec4 c = vec4(0.0, 1.0, 1.0, 1);
vec4 inputColor = vec4((texture2D(sTexture, v_TexCoord).rgb), 1.0);
float average = (inputColor.r + inputColor.g + inputColor.b) / 3.0;
vec4 grayscale = vec4(average, average, average, 1.0);
vec4 colorizedOutput = grayscale * c ;
/*Code To mix original image with blue coloured based on another floodfilledimage passed in */
gl_FragColor = mix(originalrgb.rgba, colorizedOutput.rgba, floodfillimage.r);
}
The error I receive is GL error 1282, which means GL_INVALID_OPERATION. I've debugged and found out this happens on the mix function line.
NOTE:
If I change the last line to gl_FragColor = mix(originalrgb.rgba, colorizedOutput.rgba, 0.5);, it works.
So, why is it that the code fails when I use floodfillimage.r?
Thank You.
EDIT
I've tested both textures passed in (i.e., I just rendered them to gl_FragColor) and they both are the image they're supposed to be.
The shader looks legal to me and compiles successfully with various offline compilers, so I suspect you have hit a driver bug on the device you are using.

index greater than GL_MAX_VERTEX_ATTRIBS (while it's actually lower than the HW limit)

I'm having a problem running my app on my phone, while it runs fine on the emulator. I'm using OpenGL ES 2.0 for Android and the problem seems to be in OpenGL. The following errors are basically repeated every time I draw a frame:
gles_state_set_error_internal:63: GLES error info:<program> could not be made part of current state. <program> is not linked GLES a_norm-1
gles_state_set_error_internal:62: GLES ctx: 0x7fa2596008, error code:0x502
gles_state_set_error_internal:63: GLES error info:<index> is greater than or equal to GL_MAX_VERTEX_ATTRIBS
My phone is an Allview P5 Energy running Android 5.1, kernel 3.10.65+
The emulator that runs my code well is a Google Nexus 4, 4.2.2. API 17.
As the error message suggests, I may be trying to use too many attributes per vertex, so I checked the maximum number of attributes supported by my (emulated) hardware with the following code snippet:
IntBuffer max = IntBuffer.allocate(1);
GLES20.glGetIntegerv(GLES20.GL_MAX_VERTEX_ATTRIBS,max);
System.err.println("GLES MAX IS"+max.get(0));
For the emulator this gives 29, and for my real phone 16. OK, 16 is less, but as you can see in my shader, I only use 3 attributes, and 3 < 16... The shaders are as follows:
public static final String vertexLightTexture = "uniform mat4 u_MVPMatrix;"+ // A constant representing the combined model/view/projection matrix.
"uniform mat4 u_MVMatrix;"+ // A constant representing the combined model/view matrix.
"attribute vec4 a_Position;"+ // Per-vertex position information we will pass in.
"attribute vec3 a_Normal;"+ // Per-vertex normal information we will pass in.
"attribute vec2 a_TexCoordinate;"+ // Per-vertex texture coordinate information we will pass in.
"varying vec3 v_Position;"+ // This will be passed into the fragment shader.
"varying vec3 v_Normal;"+ // This will be passed into the fragment shader.
"varying vec2 v_TexCoordinate;"+ // This will be passed into the fragment shader.
"void main()"+
"{"+
"v_Position = vec3(u_MVMatrix * a_Position);"+ // Transform the vertex into eye space.
"v_TexCoordinate = a_TexCoordinate;"+ // Pass through the texture coordinate.
"v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));"+ // Transform the normal's orientation into eye space.
"gl_Position = u_MVPMatrix * a_Position;"+ // regular position
"}";
public static final String fragmentLightTexture = "precision mediump float;"+
"uniform vec3 u_LightPos;"+ // The position of the light in eye space.
"uniform sampler2D s_Texture;"+ // The input texture.
"varying vec3 v_Position;"+ // Interpolated position for this fragment
"varying vec3 v_Normal;"+ // Interpolated normal for this fragment
"varying vec2 v_TexCoordinate;"+// Interpolated texture coordinate per fragment.\n" +
"void main()"+
"{"+
"float distance = length(u_LightPos - v_Position);"+ // Will be used for attenuation
"vec3 lightVector = normalize(u_LightPos - v_Position);" +// Get a lighting direction vector from the light to the vertex
"float diffuse = max(dot(v_Normal, lightVector), 0.0);" + // dot product for the illumination angle.
"diffuse = diffuse * (1.5 / (1.0 + (distance/8)));\n" + // Attenuation.
"diffuse = diffuse + 0.2;" + // ambient light
"gl_FragColor = (diffuse * texture2D(s_Texture, v_TexCoordinate));" +// Multiply the color by the diffuse illumination level and texture value to get final output color.
"}";
The code I use to pass data into the shader is the following (first for initialization):
int vertexShader = MyGLRenderer.loadShader(GLES20.GL_VERTEX_SHADER, Shaders.vertexLightTexture);
int fragmentShader = MyGLRenderer.loadShader(GLES20.GL_FRAGMENT_SHADER,Shaders.fragmentLightTexture);
// create empty OpenGL ES Program
mProgram = GLES20.glCreateProgram();
// add the vertex shader to program
GLES20.glAttachShader(mProgram, vertexShader);
// add the fragment shader to program
GLES20.glAttachShader(mProgram, fragmentShader);
// creates OpenGL ES program executables
GLES20.glLinkProgram(mProgram);
In the init method I also upload the vertex, normal and texture coordinates through buffers into GPU memory. Then for each frame I execute this code, and basically for each data element I try to write I get the aforementioned errors. This is the draw method called for each frame:
public void draw(float[] mvpMatrix, float[] mvMatrix, float[] lightPosition){
// group all wall elements at once
// Add program to OpenGL ES environment
GLES20.glUseProgram(mProgram);
// Get handle to shape's transformation matrix]
int mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "u_MVPMatrix");
System.err.println("GLES MVPMatrix" + mMVPMatrixHandle);
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
// Get handle to shape's transformation matrix]
int mMVMatrixHandle = GLES20.glGetUniformLocation(mProgram, "u_MVMatrix");
System.err.println("GLES MVMatrix" + mMVMatrixHandle);
// Apply the view transformation
GLES20.glUniformMatrix4fv(mMVMatrixHandle, 1, false, mvMatrix, 0);
// Get handle to textures locations
int mSamplerLoc = GLES20.glGetUniformLocation(mProgram, "s_Texture");
System.err.println("GLES s_tex" + mSamplerLoc);
// Set the active texture unit to texture unit 0.
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
// Bind the texture to this unit.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
// Set the sampler texture unit to 0, where we have saved the texture.
GLES20.glUniform1i(mSamplerLoc, 0);
// get handle to vertex shader's vPosition member
int mPositionHandle = GLES20.glGetAttribLocation(mProgram, "a_Position");
System.err.println("GLES a_pos" + mPositionHandle);
// Use the buffered data from GPU memory
int vertexStride = COORDINATES_PER_VERTEX * 4;
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, positionBufferIndex);
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glVertexAttribPointer(mPositionHandle, 3, GLES20.GL_FLOAT, false, vertexStride, 0);
// get handle to vertex shader's vPosition member
int mNormalHandle = GLES20.glGetAttribLocation(mProgram, "a_Normal");
System.err.println("GLES a_norm" + mNormalHandle);
// Use the buffered data from GPU memory
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, normalBufferIndex);
GLES20.glEnableVertexAttribArray(mNormalHandle);
GLES20.glVertexAttribPointer(mNormalHandle, 3, GLES20.GL_FLOAT, false, vertexStride, 0);
// Get handle to texture coordinates location
int mTextureCoordinateHandle = GLES20.glGetAttribLocation(mProgram, "a_TexCoordinate" );
System.err.println("GLES a_tex" + mTextureCoordinateHandle);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, uvBufferIndex);
GLES20.glEnableVertexAttribArray(mTextureCoordinateHandle);
GLES20.glVertexAttribPointer(mTextureCoordinateHandle, 2, GLES20.GL_FLOAT, false, 0, 0);
// Clear the currently bound buffer (so future OpenGL calls do not use this buffer).
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
// Pass in the light position in eye space.
int mLightPosHandle = GLES20.glGetUniformLocation(mProgram, "u_LightPos");
System.err.println("GLES u_light" + mLightPosHandle);
GLES20.glUniform3f(mLightPosHandle, lightPosition[0], lightPosition[1], lightPosition[2]);
// Draw the Square
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, indexBufferIndex);
GLES20.glDrawElements(GLES20.GL_TRIANGLES,numberOfIndices,GLES20.GL_UNSIGNED_SHORT,0);
// Clear the currently bound buffer (so future OpenGL calls do not use this buffer).
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
// Disable vertex array
GLES20.glDisableVertexAttribArray(mPositionHandle);
GLES20.glDisableVertexAttribArray(mNormalHandle);
GLES20.glDisableVertexAttribArray(mTextureCoordinateHandle);
}
Any suggestions would be really helpful. I've tried a lot already, like using a simple cube model instead of the high-poly figure I'm trying to draw, but nothing has worked so far to get rid of the errors on the phone. On the emulator any model is drawn well...
The problem was in the fragment shader. The compiler on the phone rejected dividing a float by the integer literal 8; it had to be 8.0 so the literal is a float. (GLSL ES 1.00 has no implicit int-to-float conversions, so distance/8 is strictly invalid; the emulator's compiler was simply more lenient.)
public static final String fragmentLightTexture = "precision mediump float;"+
"uniform vec3 u_LightPos;"+ // The position of the light in eye space.
"uniform sampler2D s_Texture;"+ // The input texture.
"varying vec3 v_Position;"+ // Interpolated position for this fragment
"varying vec3 v_Normal;"+ // Interpolated normal for this fragment
"varying vec2 v_TexCoordinate;"+// Interpolated texture coordinate per fragment.\n" +
"void main()"+
"{"+
"float distance = length(u_LightPos - v_Position);"+ // Will be used for attenuation
"vec3 lightVector = normalize(u_LightPos - v_Position);" +// Get a lighting direction vector from the light to the vertex
"float diffuse = max(dot(v_Normal, lightVector), 0.0);" + // dot product for the illumination angle.
"diffuse = diffuse * (1.5 / (1.0 + (distance/8.0)));\n" + // Attenuation.
"diffuse = diffuse + 0.2;" + // ambient light
"gl_FragColor = (diffuse * texture2D(s_Texture, v_TexCoordinate));" +// Multiply the color by the diffuse illumination level and texture value to get final output color.
"}";
I have to say the GL_MAX_VERTEX_ATTRIBS error message wasn't helpful at all and really threw me off, sending me searching in the wrong direction.
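A minimal sketch of the two forms (a strict GLSL ES 1.00 compiler rejects the first, because the language defines no implicit int-to-float conversion; desktop GLSL and many lenient mobile compilers accept both, which is why the emulator never complained):

```glsl
float d = 10.0;
float bad  = d / 8;    // float / int: rejected by strict GLSL ES 1.00 compilers
float good = d / 8.0;  // float / float: valid everywhere
```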

Bilinear filter on Android using OpenGL ES 2.0

Here is my code. It runs well on PC/Windows, but the image is jagged on Android 4.4.2 when I magnify it.
#ifdef GL_ES
precision highp float;
#endif
varying vec4 v_fragmentColor;
varying vec2 v_texCoord;
uniform float u_width; //width of image
uniform float u_height; //height of image
void main()
{
float texelSizeX = 1.0/u_width;
float texelSizeY = 1.0/u_height;
//four pixels' color
vec4 p0q0 = texture2D(CC_Texture0, v_texCoord);
vec4 p1q0 = texture2D(CC_Texture0, v_texCoord + vec2(texelSizeX, 0));
vec4 p0q1 = texture2D(CC_Texture0, v_texCoord + vec2(0, texelSizeY));
vec4 p1q1 = texture2D(CC_Texture0, v_texCoord + vec2(texelSizeX , texelSizeY));
//bilinear interpolation
float a = fract(v_texCoord.s * u_width);
float b = fract(v_texCoord.t * u_height);
vec4 color_q0 = mix( p0q0, p1q0, a );
vec4 color_q1 = mix( p0q1, p1q1, a );
vec4 color = mix( color_q0, color_q1, b);
gl_FragColor = v_fragmentColor * color;
}
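The interpolation arithmetic in this shader (fract for the sub-texel offsets, then two horizontal mixes and one vertical mix) can be verified on the CPU with a small sketch; the texel fetches are replaced by given corner values for one channel:

```java
// CPU reference for the shader's bilinear math: fract() of the scaled texture
// coordinate gives the blend weights, then three mix() (lerp) steps combine
// the four neighboring texel values.
public class Bilinear {
    static float fract(float x) { return x - (float) Math.floor(x); }
    static float mix(float a, float b, float t) { return a + (b - a) * t; }

    // p0q0..p1q1 are the four neighboring texel values (a single channel);
    // s and t are the texture coordinates already scaled by width/height.
    static float bilinear(float p0q0, float p1q0, float p0q1, float p1q1,
                          float s, float t) {
        float a = fract(s);                  // horizontal weight
        float b = fract(t);                  // vertical weight
        float q0 = mix(p0q0, p1q0, a);       // blend along the bottom row
        float q1 = mix(p0q1, p1q1, a);       // blend along the top row
        return mix(q0, q1, b);               // blend the two rows
    }
}
```

Halfway between a black and a white texel this lands exactly on 0.5, as expected.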
I'm sorry that I cannot post pictures. I debugged the code with VS2012 and the image looks smooth.
But when I run the program on Android, the image is full of jaggies. I don't know why.
Obvious question: why are you doing bilinear filtering in your shader instead of just using the built-in hardware bilinear filtering? I'm sure there's a good reason, but telling us might help you avoid a lot of questions along the lines of "have you set your filtering mode appropriately?"
That being said, it's likely to be a precision problem. I'd guess you have GL_NEAREST filtering set to disable the hardware bilinear filtering, so you probably want to round v_texCoord to the exact sampling site. Due to precision problems, e.g. v_texCoord + vec2(texelSizeX, 0) may sample the same texel rather than the next one along when v_texCoord is close to 0, or the sample taken at v_texCoord may land on the next texel along when it's close to 1, or something along those lines.
OpenGL considers the centre of a texel to be its location. So in 1D you could do something like:
r_texCoord.x = v_texCoord.x - mod(v_texCoord.x, 1.0/u_width) + 0.5/u_width;
Or, if you were happy to use integral texture coordinates rather than the normal OpenGL [0.0, 1.0) range, you could simplify slightly, because floor (and indeed ceil) already knows how to move you to an integral boundary:
(floor(v_texCoord.x) + 0.5) / u_width
... both of which are dependent reads so performance will suffer quite a bit.
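The snapping formula from the answer can be checked on the CPU; this sketch maps a normalized coordinate to the centre of the texel it falls in, using a GLSL-style mod:

```java
// CPU reference for the answer's texel-centre snap:
//   snapped = v - mod(v, 1/width) + 0.5/width
// i.e. drop the offset into the current texel, then step to its centre.
public class TexelSnap {
    static float mod(float x, float y) {
        return x - y * (float) Math.floor(x / y); // GLSL-style mod()
    }

    static float snapToTexelCenter(float texCoord, float width) {
        float texel = 1.0f / width;
        return texCoord - mod(texCoord, texel) + 0.5f * texel;
    }
}
```

With a 4-texel-wide texture, any coordinate inside [0.25, 0.5) snaps to 0.375, the centre of the second texel, so neighbouring offsets of one texel size always land in distinct texels.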

libgdx - shader ignores blending/alpha in first frame

I'm rendering a libgdx mesh using a shader with an alpha color less than 1.0.
When rendering the first frame, the alpha value set in the shader is ignored and rendered as 1.0. All subsequent frames are fine.
The same happened to me in a previous project drawing lines and shapes using glDrawArrays, and I haven't found the solution yet.
Code in libgdx render() loop:
Gdx.gl20.glClearColor(0, 0, 0, 1);
Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
Gdx.gl20.glEnable(GL20.GL_BLEND);
MeshManager_colorFill.meshShader.begin();
MeshManager_colorFill.meshShader.setUniformMatrix("u_worldView", Snappr.camMatrix);
meshArray_colorFill.render(MeshManager_colorFill.meshShader, GL20.GL_TRIANGLE_FAN, 0, meshArray_colorFill.get(i).getNumVertices());
MeshManager_colorFill.meshShader.end();
Gdx.gl20.glDisable(GL20.GL_BLEND);
My shader (compiled in create(){}):
public static final String meshVertexShader_colorFill =
"attribute vec2 a_position; \n" +
"uniform mat4 u_worldView;\n" +
"void main() \n" +
"{ \n" +
" gl_Position = u_worldView * vec4(a_position.xy, 0, 1); \n" +
"} \n" ;
public static final String meshFragmentShader_colorFill =
"precision mediump float;\n" +
"void main() \n" +
"{ \n" +
" gl_FragColor = vec4(1,1,1,0.2);\n" +
"}";
How do I render the very first frame as it should be?
Thanks
glBlendFunc in create() does the trick, specifically: "Gdx.gl20.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);". I'm using a SpriteBatch to render a font right after the mesh, and glBlendFunc is called internally by the SpriteBatch; it looks like that's why all the other frames were fine.
I found that ModelBatch.begin(...) will disable Blending by calling Gdx.gl.glDisable(GL20.GL_BLEND). So make sure to enable blending AFTER calling ModelBatch.begin().
modelBatch.begin(camera); // resets Blending to false
// enable Blending
Gdx.gl.glEnable(GL20.GL_BLEND);
Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
// draw mesh
mesh.render(shader, GL20.GL_TRIANGLES);
modelBatch.end();
Source https://stackoverflow.com/a/66820414/2413469

Android ImageView Matrix to Texture Matrix

It has been quite a while since I've worked with matrix calculations and mapping them to expected results; it's sort of intimidating. I'm curious whether anyone out there has mapped an ImageView matrix (android.graphics.Matrix) to an OpenGL matrix (android.opengl.Matrix) in order to update a texture based upon the image behind it.
There will always be a photograph directly behind the texture, and the texture in front should stay at the same scale and translation as the ImageView (I'm actually using the ImageTouchView library). The ImageView scaling and zooming works as expected and desired.
So, while the image is being resized, I want to update the OpenGL ES 2.0 shader code in the project so that the texture changes size with it. I thought I would be able to do this by using callbacks and directly manipulating the shader inputs, but it doesn't seem to be working. I'm not sure if this is a shader problem or a problem with how the matrix is passed in. The trigger is reached from the ImageTouchView when a matrix change is detected.
#Override
public void onMatrixChanged(Matrix matrix)
{
photoViewRenderer_.updateTextureMatrix(matrix);
photoSurfaceView_.requestRender();
}
When I receive that matrix data and attempt to display the new matrix in the texture, I get a black screen on top of the ImageTouchView. So this could very well just be an OpenGL problem in the code. However, this is what it looks like, so we can see exactly what is being described.
The shader code looks something like this. And I'll start with these and add additional code as requested based upon feedback.
private final String vertexShader_ =
"uniform float zoom;\n" +
"attribute vec4 a_position;\n" +
"attribute vec4 a_texCoord;\n" +
"attribute vec3 a_translation;\n" +
"varying vec2 v_texCoord;\n" +
"void main() {\n" +
" gl_Position = a_position * vec4(a_translation, 1.0);\n" +
" v_texCoord = a_texCoord.xy * zoom;\n" +
"}\n";
Before adding the line with vec4(a_translation, 1.0); it seemed to be working as expected, in that the image was displayed directly on top of the ImageTouchView at equal size just fine. So it is probably the shader. But... I cannot rule out that the data coming in from the image matrix is screwing up the texture and positioning it way off screen. I don't know what to use as a default matrix for a_translation to check against that.
Edit:
The black screen is actually not an issue now. Setting a default a_position of private float[] positionTranslationData_ = {1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f}; brought the image back. But the texture is not being manipulated by the inputs from the onMatrixChanged(Matrix matrix) function called by the ImageTouchView.
If you want to translate the image (by moving the rendered pixels) you have two options.
Translate the gl_Position:
private final String vertexShader_ =
"uniform float zoom;\n" +
"attribute vec4 a_position;\n" +
"attribute vec4 a_texCoord;\n" +
"attribute vec3 a_translation;\n" +
"varying vec2 v_texCoord;\n" +
"void main() {\n" +
" gl_Position = a_position + vec4(a_translation, 1.0);\n" + // Note the + here
" v_texCoord = a_texCoord.xy * zoom;\n" +
"}\n";
Apply an affine transform, using a 4x4 matrix, to gl_Position (be aware that the 4th component of a_position must be 1.0):
private final String vertexShader_ =
"uniform float zoom;\n" +
"attribute vec4 a_position;\n" +
"attribute vec4 a_texCoord;\n" +
"attribute mat4 a_translation;\n" +
"varying vec2 v_texCoord;\n" +
"void main() {\n" +
" gl_Position = a_position*a_translation;\n" +
" v_texCoord = a_texCoord.xy * zoom;\n" +
"}\n";
If you want to move the texture (not the quad or whatever you are rendering), you can apply the same logic to v_texCoord. This will apply a translation and/or rotation to the output texture coordinates.
Probably one reason for your problem is that right now your code multiplies component-wise: it multiplies a_position.x by a_translation.x, the same for 'y', and so on, which I think is not what you are trying to achieve.
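The difference between the two products can be shown with a small CPU sketch: a component-wise vec4 * vec4 scales each axis (and zeroes any axis whose factor is 0), while a mat4 * vec4 product with a translation matrix actually moves the point:

```java
// Component-wise product vs. 4x4 matrix product, to illustrate why
// a_position * vec4(a_translation, 1.0) does not translate the vertex.
public class VecMath {
    // GLSL-style component-wise vec4 * vec4 (what the * operator does on vecs)
    static float[] mulComponentWise(float[] a, float[] b) {
        return new float[] { a[0] * b[0], a[1] * b[1], a[2] * b[2], a[3] * b[3] };
    }

    // mat4 (rows[r][c]) * vec4: an actual affine transform when the matrix
    // carries the translation in its last column and v[3] == 1.0
    static float[] mulMatVec(float[][] m, float[] v) {
        float[] out = new float[4];
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                out[r] += m[r][c] * v[c];
        return out;
    }
}
```

With position (2, 3, 0, 1), multiplying component-wise by (5, 0, 0, 1) gives (10, 0, 0, 1), a per-axis scale; multiplying by a matrix whose last column is (5, 0, 0, 1) gives (7, 3, 0, 1), a shift of 5 along x.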
