Android: Invert a Bitmap at Runtime

I am trying to invert a bitmap by using a Paint ColorFilter.
I used this link as a reference:
http://www.mail-archive.com/android-developers@googlegroups.com/msg47520.html
but it has absolutely no effect: the bitmap is drawn normally. Can you tell me what I'm doing incorrectly?
Define the float array:
float invert[] = {
        -1.0f, 0.0f, 0.0f, 1.0f, 0.0f,
        0.0f, -1.0f, 0.0f, 1.0f, 0.0f,
        0.0f, 0.0f, -1.0f, 1.0f, 0.0f,
        1.0f, 1.0f, 1.0f, 1.0f, 0.0f
};
Set up the Paint in the constructor:
ColorMatrix cm = new ColorMatrix(invert);
invertPaint.setColorFilter(new ColorMatrixColorFilter(cm));
Reference it in the Draw() method:
c.drawBitmap(Bitmap, null, Screen, invertPaint);
EDIT: I was able to get it to work by having the paint assignment in the draw statement:
ColorMatrix cm = new ColorMatrix(invert);
invertPaint.setColorFilter(new ColorMatrixColorFilter(cm));
c.drawBitmap(rm.getBitmap(DefaultKey), null, Screen, invertPaint);
but now it renders really slowly (probably because it's setting up a complicated matrix every single frame)... Is there a reason it works when it's in the same method?
EDIT2:
NEVERMIND!!! Lol, the issue was that I had two constructors and I was only configuring the ColorFilter in one of them... The process is still very CPU intensive and causes framerate issues.
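For anyone hitting the same thing, a minimal sketch of the pattern that avoids both pitfalls (assuming a custom View with the invert array and invertPaint as fields; MyView and initInvertPaint are illustrative names): build the filter once in a shared init method that every constructor calls, then just reuse the Paint in Draw().
private final Paint invertPaint = new Paint();

public MyView(Context context) {
    super(context);
    initInvertPaint();
}

public MyView(Context context, AttributeSet attrs) {
    super(context, attrs);
    initInvertPaint(); // every constructor goes through the same setup
}

private void initInvertPaint() {
    // Build the ColorMatrixColorFilter once; reusing it each frame is cheap,
    // rebuilding it each frame is not.
    invertPaint.setColorFilter(new ColorMatrixColorFilter(new ColorMatrix(invert)));
}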

This is an old thread.
However, the matrix above is not good for inverting anti-aliased images with transparency.
It should be:
float invert[] = {
        -1.0f, 0.0f, 0.0f, 1.0f, 1.0f,
        0.0f, -1.0f, 0.0f, 1.0f, 1.0f,
        0.0f, 0.0f, -1.0f, 1.0f, 1.0f,
        0.0f, 0.0f, 0.0f, 1.0f, 0.0f
};
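To see what this matrix does: each color channel is inverted against the pixel's own alpha rather than a constant 255, so opaque pixels invert normally while fully transparent pixels stay (0, 0, 0, 0) instead of turning white, which avoids light fringes on anti-aliased edges. A minimal, untested sanity check (assuming the invert array above; this snippet is not from the original answer):
Paint p = new Paint();
p.setColorFilter(new ColorMatrixColorFilter(new ColorMatrix(invert)));

Bitmap src = Bitmap.createBitmap(1, 1, Bitmap.Config.ARGB_8888);
src.eraseColor(Color.argb(255, 255, 0, 0)); // opaque red

Bitmap dst = Bitmap.createBitmap(1, 1, Bitmap.Config.ARGB_8888);
new Canvas(dst).drawBitmap(src, 0, 0, p);
// dst.getPixel(0, 0) is roughly Color.argb(255, 0, 255, 255): opaque red -> cyan.
// A fully transparent source pixel would stay fully transparent black, because
// the offset comes from the alpha channel instead of a constant 255.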

Related

How to create multiple camera2 previews in the most efficient way?

I am trying to create 4 streams of my camera preview in my activity. I created a TextureView which is registered with the camera2 API for the feed, and then I set up a SurfaceTextureListener on the TextureView in order to listen for changes to the feed and update the other 3 previews (ImageViews) accordingly. You can see my code below:
private final TextureView.SurfaceTextureListener mSurfaceTextureListener
        = new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture texture, int width, int height) {
        cameraHandler.openCamera(width, height);
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
    }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture texture) {
        return true;
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture texture) {
        for (ImageView mSecondaryPreview : mSecondaryPreviews) {
            Bitmap frame = Bitmap.createBitmap(mTextureView.getWidth(), mTextureView.getHeight(), Bitmap.Config.ARGB_8888);
            mTextureView.getBitmap(frame);
            mSecondaryPreview.setImageBitmap(frame);
        }
    }
};
As you can see, this has to read from the TextureView for every frame, extract the bitmap, and then set the bitmap on the other 3 ImageView streams. I initially tried to do this on the UI thread, which is very slow, and then tried to submit it to the background handler, which gave a better frame rate but caused a lot of issues with the app crashing under the load.
Thanks
EDIT
So in order to crop the bottom 0.4375 of the preview, I changed ttmp to:
float[] ttmp = {
        // Quad 1 (two triangles):
        0.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 0.4375f,
        0.0f, 0.4375f,
        1.0f, 1.0f,
        1.0f, 0.4375f,
        // Quad 2:
        0.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 0.4375f,
        0.0f, 0.4375f,
        1.0f, 1.0f,
        1.0f, 0.4375f,
        // Quad 3:
        0.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 0.4375f,
        0.0f, 0.4375f,
        1.0f, 1.0f,
        1.0f, 0.4375f,
        // Quad 4:
        0.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 0.4375f,
        0.0f, 0.4375f,
        1.0f, 1.0f,
        1.0f, 0.4375f
};
but this did not crop as expected
So if you can get GLSurfaceView working with the camera preview as in this question, then you can make 3 more copies simply by adding 6 more triangles. Let me explain how they are laid out. vtmp and ttmp describe the vertex coordinates and texture coordinates, respectively, of two triangles, in GL_TRIANGLE_STRIP form:
float[] vtmp = {
        1.0f, 1.0f,   //Top right of screen
        1.0f, -1.0f,  //Bottom right of screen
        -1.0f, 1.0f,  //Top left of screen
        -1.0f, -1.0f  //Bottom left of screen
};
float[] ttmp = {
        1.0f, 1.0f,  //Top right of camera surface
        0.0f, 1.0f,  //Top left of camera surface
        1.0f, 0.0f,  //Bottom right of camera surface
        0.0f, 0.0f   //Bottom left of camera surface
};
Step 1: Let's change the primitive type to GL_TRIANGLES, because this is going to be difficult with a GL_TRIANGLE_STRIP:
//This also includes a fix for the left/right inversion:
float[] vtmp = {
        //Triangle 1:
        -1.0f, -1.0f,
        -1.0f, 1.0f,
        1.0f, 1.0f,
        //Triangle 2:
        1.0f, -1.0f,
        -1.0f, 1.0f,
        1.0f, 1.0f
};
float[] ttmp = {
        //Triangle 1:
        0.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 0.0f,
        //Triangle 2:
        0.0f, 0.0f,
        1.0f, 1.0f,
        1.0f, 0.0f
};
// 6 vertices * 2 floats * 4 bytes each = 12*4 bytes:
pVertex = ByteBuffer.allocateDirect(12*4).order(ByteOrder.nativeOrder()).asFloatBuffer();
(...)
pTexCoord = ByteBuffer.allocateDirect(12*4).order(ByteOrder.nativeOrder()).asFloatBuffer();
(...)
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6); //Careful: Multiple changes on this line (primitive type and vertex count)
(...)
If Step 1 went well, then let's add the extra triangles in Step 2:
float[] vtmp = {
        //Triangle 1:
        -1.0f, 0.0f,
        -1.0f, 1.0f,
        0.0f, 0.0f,
        //Triangle 2:
        0.0f, 0.0f,
        -1.0f, 1.0f,
        0.0f, 1.0f,
        //Triangle 3:
        0.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 0.0f,
        //Triangle 4:
        1.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 1.0f,
        //Triangle 5:
        0.0f, -1.0f,
        0.0f, 0.0f,
        1.0f, -1.0f,
        //Triangle 6:
        1.0f, -1.0f,
        0.0f, 0.0f,
        1.0f, 0.0f,
        //Triangle 7:
        -1.0f, -1.0f,
        -1.0f, 0.0f,
        0.0f, -1.0f,
        //Triangle 8:
        0.0f, -1.0f,
        -1.0f, 0.0f,
        0.0f, 0.0f
};
float[] ttmp = {
        //This is the same as in Step 1, but duplicated 4 times over:
        //Triangle 1:
        0.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 0.0f,
        //Triangle 2:
        0.0f, 0.0f,
        1.0f, 1.0f,
        1.0f, 0.0f,
        //Triangle 3:
        0.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 0.0f,
        //Triangle 4:
        0.0f, 0.0f,
        1.0f, 1.0f,
        1.0f, 0.0f,
        //Triangle 5:
        0.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 0.0f,
        //Triangle 6:
        0.0f, 0.0f,
        1.0f, 1.0f,
        1.0f, 0.0f,
        //Triangle 7:
        0.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 0.0f,
        //Triangle 8:
        0.0f, 0.0f,
        1.0f, 1.0f,
        1.0f, 0.0f
};
// Now 8 triangles * 3 vertices * 2 floats * 4 bytes each = 48*4 bytes:
pVertex = ByteBuffer.allocateDirect(48*4).order(ByteOrder.nativeOrder()).asFloatBuffer();
(...)
pTexCoord = ByteBuffer.allocateDirect(48*4).order(ByteOrder.nativeOrder()).asFloatBuffer();
(...)
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 24); // 8 triangles * 3 vertices = 24
(...)

TensorFlow error on Android

I am learning TensorFlow, and following a tutorial I was able to make a custom model to run in an Android app, but I am having problems with it. I have the following code:
public void testModel(Context ctx) {
    String model_file = "file:///android_asset/model_graph.pb";
    int[] result = new int[2];
    float[] input = new float[]{0.0F, 1.0F, 0.0F, 1.0F, 1.0F, 0.0F, 0.0F, 1.0F, 0.0F, 1.0F, 0.0F, 1.0F, 0.0F, 1.0F, 1.0F, 0.0F, 0.0F, 0.0F, 0.0F, 1.0F, 1.0F, 0.0F, 1.0F, 0.0F, 1.0F, 0.0F, 1.0F, 0.0F, 0.0F, 1.0F, 1.0F, 0.0F, 0.0F, 0.0F, 1.0F, 0.0F, 0.0F, 1.0F, 0.0F, 1.0F, 0.0F, 1.0F, 1.0F, 0.0F, 0.0F, 1.0F, 0.0F, 0.0F, 0.0F, 1.0F, 0.0F, 1.0F, 0.0F, 1.0F, 1.0F, 0.0F, 1.0F, 0.0F, 0.0F, 0.0F, 1.0F, 0.0F, 1.0F, 0.0F, 1.0F, 0.0F, 1.0F, 0.0F};
    TensorFlowInferenceInterface inferenceInterface;
    inferenceInterface = new TensorFlowInferenceInterface(ctx.getAssets(), model_file);
    inferenceInterface.feed("input", input, 68);
    inferenceInterface.run(new String[]{"output"});
    inferenceInterface.fetch("output", result);
    Log.v(TAG, Arrays.toString(result));
}
I get an error when the app tries to run the inferenceInterface.run(new String[]{"output"}) method:
java.lang.IllegalArgumentException: In[0] is not a matrix
[[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_input_0_0, W1)]]
I don't believe the model I created is the problem, because I was able to use it in Python code with positive results.
From the error message (In[0] is not a matrix), it appears that your model requires the input to be a matrix (i.e., a two-dimensional tensor), while you are feeding a one-dimensional tensor (a vector) with 68 elements.
In particular, the dims argument to TensorFlowInferenceInterface.feed seems incorrect in the line:
inferenceInterface.feed("input", input, 68);
Instead, it should be something like:
inferenceInterface.feed("input", input, 68, 1);
if your model expects a 68x1 matrix (or 34, 2 if it expects a 34x2 matrix; 17, 4 for a 17x4 matrix, etc.).
Hope that helps.
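Note that many models built in Python put the batch dimension first, i.e. a placeholder shaped [None, 68]; if that is the case here, a single example would be fed as a 1x68 matrix instead. A hedged sketch of that variant:
// If the graph defines tf.placeholder(tf.float32, [None, 68]),
// feed one example as a 1x68 matrix (batch size first):
inferenceInterface.feed("input", input, 1, 68);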

OpenGL ES 2.0 - Textures always black

I've been reading a lot about textures in OpenGL ES 2.0 today. My problem is that they are all black.
My code:
To generate a texture from a bitmap:
private void generateTexture(Bitmap bmp) {
    final int[] textureHandle = new int[1];
    Log.d(TAG, "Generating texture handle");
    GLES20.glGenTextures(1, textureHandle, 0);
    if (textureHandle[0] != 0) {
        Log.d(TAG, "binding texture to " + textureHandle[0]);
        // Bind to the texture in OpenGL
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
        Log.d(TAG, "GLError#bindTex=" + GLES20.glGetError());
        // Set filtering
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S,
                GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T,
                GLES20.GL_CLAMP_TO_EDGE);
        Log.d(TAG, "Loading bitmap into texture");
        // Load the bitmap into the bound texture.
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bmp, 0);
        Log.d(TAG, "GLError#texImg2D=" + GLES20.glGetError());
        Log.d(TAG, "Recycle bitmap");
        // Recycle the bitmap, since its data has been loaded into OpenGL.
        bmp.recycle();
    }
}
There are no errors in my logcat; everything seems to be the way it's supposed to be.
How I use the texture:
if (mShader instanceof Texture2DShader && mTextureBuffer != null) {
    // activate texture
    Log.d(TAG, "Passing texture stuff");
    mTextureBuffer.position(0);
    Log.d(TAG, "Activate Texture");
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    Log.d(TAG, "Binding texture -> " + mTexture.getHandle());
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTexture.getHandle());
    if (mShader.getGLLocation(BaseShader.U_TEXTURE) != -1) {
        Log.d(TAG, "Passing u_Texture");
        GLES20.glUniform1i(mShader.getGLLocation(BaseShader.U_TEXTURE), 0);
    }
    if (mShader.getGLLocation(BaseShader.A_TEXCOORDINATE) != -1) {
        Log.d(TAG, "Passing a_TexCoordinate");
        GLES20.glVertexAttribPointer(mShader.getGLLocation(BaseShader.A_TEXCOORDINATE), 2, GLES20.GL_FLOAT, false, 0, mTextureBuffer);
        GLES20.glEnableVertexAttribArray(mShader.getGLLocation(BaseShader.A_TEXCOORDINATE));
    }
    Log.d(TAG, "Texture stuff passed.");
    Log.d(TAG, "Error = " + GLES20.glGetError());
}
Logcat says something like this:
D/TextureCube﹕ Activate Texture
D/TextureCube﹕ Binding texture -> 3
D/TextureCube﹕ Passing u_Texture
D/TextureCube﹕ Passing a_TexCoordinate
D/TextureCube﹕ Texture stuff passed.
D/TextureCube﹕ Error = 0
So no error, seems to be working?
My shaders:
Fragment shader:
precision mediump float; // Set the default precision to medium. We don't need as high of a
// precision in the fragment shader.
uniform vec3 u_LightPos; // The position of the light in eye space.
uniform vec4 u_Light;
uniform vec4 u_Ambient;
uniform vec3 u_LightDirection;
uniform vec3 u_CameraPos;
uniform sampler2D u_Texture;
//varying vec4 v_Ambient; // Ambient light factor!
varying vec2 v_TexCoordinate;
varying vec3 v_Position; // Interpolated position for this fragment.
varying vec4 v_Color; // This is the color from the vertex shader interpolated across the triangle per fragment.
varying vec3 v_Normal; // Interpolated normal for this fragment.
//varying vec3 v_CameraPosition;
// The entry point for our fragment shader.
void main()
{
    // Will be used for attenuation.
    float distance = length(u_LightPos - v_Position);
    // Get a lighting direction vector from the light to the vertex.
    vec3 lightVector = normalize(u_LightPos - v_Position);
    // Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
    // pointing in the same direction then it will get max illumination.
    float diffuse = max(dot(v_Normal, lightVector), 0.1);
    // Add attenuation. (used to be 0.25)
    diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));
    // calculate specular light!
    //vec3 lightDirection = -u_LightDirection;
    //vec3 vertexToEye = normalize(u_CameraPos - v_CameraPos);
    //vec3 lightReflect = normalize(reflect(u_LightDirection, v_Normal));
    //float specularFactor = dot(vertexToEye, lightReflect);
    // Multiply the color by the diffuse illumination level to get final output color.
    gl_FragColor = v_Color * (u_Ambient + (diffuse * u_Light) * texture2D(u_Texture, v_TexCoordinate));
}
Vertex Shader:
uniform mat4 u_MVPMatrix; // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix; // A constant representing the combined model/view matrix.
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec4 a_Color; // Per-vertex color information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
attribute vec2 a_TexCoordinate;
varying vec2 v_TexCoordinate;
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec4 v_Color; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
//varying vec3 v_CameraPosition;
//varying vec4 v_Ambient; // Pass the ambient color to the fragment shader.
// The entry point for our vertex shader.
void main()
{
    // Transform the vertex into eye space.
    v_Position = vec3(u_MVMatrix * a_Position);
    v_TexCoordinate = a_TexCoordinate;
    // Pass through the color.
    v_Color = a_Color;
    // Transform the normal's orientation into eye space.
    v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
    //v_CameraPos = vec3(u_MVMatrix * vec4(u_CameraPos, 0.0));
    //v_CameraPosition = u_CameraPos;
    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    gl_Position = u_MVPMatrix * a_Position;
}
It seems like texture2D returns a zero vector in the fragment shader, since if I simply write gl_FragColor = vec4(0.5,0.5,0.5,1.0) + texture2D(..), the geometry is drawn.
I've already looked at countless questions here on SO as well as on other websites; I know this exact question has been asked a couple of times, but no matter what I've tried, it did not help.
I've already downscaled my texture to 512x512, then 256x256, and even further down to 64x64, but no change. I've printed out my texture handles, checked for GL errors, etc., but nothing.
EDIT:
At first I was loading the texture from R.raw, then I moved it to R.drawable, but no change.
EDIT 2: Cube vertices/normals/texture/color declaration:
private final float[] mCubePosition = {
        // Front face
        -1.0f, 1.0f, 1.0f,
        -1.0f, -1.0f, 1.0f,
        1.0f, 1.0f, 1.0f,
        -1.0f, -1.0f, 1.0f,
        1.0f, -1.0f, 1.0f,
        1.0f, 1.0f, 1.0f,
        // Right face
        1.0f, 1.0f, 1.0f,
        1.0f, -1.0f, 1.0f,
        1.0f, 1.0f, -1.0f,
        1.0f, -1.0f, 1.0f,
        1.0f, -1.0f, -1.0f,
        1.0f, 1.0f, -1.0f,
        // Back face
        1.0f, 1.0f, -1.0f,
        1.0f, -1.0f, -1.0f,
        -1.0f, 1.0f, -1.0f,
        1.0f, -1.0f, -1.0f,
        -1.0f, -1.0f, -1.0f,
        -1.0f, 1.0f, -1.0f,
        // Left face
        -1.0f, 1.0f, -1.0f,
        -1.0f, -1.0f, -1.0f,
        -1.0f, 1.0f, 1.0f,
        -1.0f, -1.0f, -1.0f,
        -1.0f, -1.0f, 1.0f,
        -1.0f, 1.0f, 1.0f,
        // Top face
        -1.0f, 1.0f, -1.0f,
        -1.0f, 1.0f, 1.0f,
        1.0f, 1.0f, -1.0f,
        -1.0f, 1.0f, 1.0f,
        1.0f, 1.0f, 1.0f,
        1.0f, 1.0f, -1.0f,
        // Bottom face
        1.0f, -1.0f, -1.0f,
        1.0f, -1.0f, 1.0f,
        -1.0f, -1.0f, -1.0f,
        1.0f, -1.0f, 1.0f,
        -1.0f, -1.0f, 1.0f,
        -1.0f, -1.0f, -1.0f,
};
// R, G, B, A
private final float[] mCubeColors = {
        // Front face (red)
        1.0f, 0.0f, 0.0f, 1.0f,
        1.0f, 0.0f, 0.0f, 1.0f,
        1.0f, 0.0f, 0.0f, 1.0f,
        1.0f, 0.0f, 0.0f, 1.0f,
        1.0f, 0.0f, 0.0f, 1.0f,
        1.0f, 0.0f, 0.0f, 1.0f,
        // Right face (green)
        0.0f, 1.0f, 0.0f, 1.0f,
        0.0f, 1.0f, 0.0f, 1.0f,
        0.0f, 1.0f, 0.0f, 1.0f,
        0.0f, 1.0f, 0.0f, 1.0f,
        0.0f, 1.0f, 0.0f, 1.0f,
        0.0f, 1.0f, 0.0f, 1.0f,
        // Back face (blue)
        0.0f, 0.0f, 1.0f, 1.0f,
        0.0f, 0.0f, 1.0f, 1.0f,
        0.0f, 0.0f, 1.0f, 1.0f,
        0.0f, 0.0f, 1.0f, 1.0f,
        0.0f, 0.0f, 1.0f, 1.0f,
        0.0f, 0.0f, 1.0f, 1.0f,
        // Left face (yellow)
        1.0f, 1.0f, 0.0f, 1.0f,
        1.0f, 1.0f, 0.0f, 1.0f,
        1.0f, 1.0f, 0.0f, 1.0f,
        1.0f, 1.0f, 0.0f, 1.0f,
        1.0f, 1.0f, 0.0f, 1.0f,
        1.0f, 1.0f, 0.0f, 1.0f,
        // Top face (cyan)
        0.0f, 1.0f, 1.0f, 1.0f,
        0.0f, 1.0f, 1.0f, 1.0f,
        0.0f, 1.0f, 1.0f, 1.0f,
        0.0f, 1.0f, 1.0f, 1.0f,
        0.0f, 1.0f, 1.0f, 1.0f,
        0.0f, 1.0f, 1.0f, 1.0f,
        // Bottom face (magenta)
        1.0f, 0.0f, 1.0f, 1.0f,
        1.0f, 0.0f, 1.0f, 1.0f,
        1.0f, 0.0f, 1.0f, 1.0f,
        1.0f, 0.0f, 1.0f, 1.0f,
        1.0f, 0.0f, 1.0f, 1.0f,
        1.0f, 0.0f, 1.0f, 1.0f
};
private final float[] mCubeNormals = {
        // Front face
        0.0f, 0.0f, 1.0f,
        0.0f, 0.0f, 1.0f,
        0.0f, 0.0f, 1.0f,
        0.0f, 0.0f, 1.0f,
        0.0f, 0.0f, 1.0f,
        0.0f, 0.0f, 1.0f,
        // Right face
        1.0f, 0.0f, 0.0f,
        1.0f, 0.0f, 0.0f,
        1.0f, 0.0f, 0.0f,
        1.0f, 0.0f, 0.0f,
        1.0f, 0.0f, 0.0f,
        1.0f, 0.0f, 0.0f,
        // Back face
        0.0f, 0.0f, -1.0f,
        0.0f, 0.0f, -1.0f,
        0.0f, 0.0f, -1.0f,
        0.0f, 0.0f, -1.0f,
        0.0f, 0.0f, -1.0f,
        0.0f, 0.0f, -1.0f,
        // Left face
        -1.0f, 0.0f, 0.0f,
        -1.0f, 0.0f, 0.0f,
        -1.0f, 0.0f, 0.0f,
        -1.0f, 0.0f, 0.0f,
        -1.0f, 0.0f, 0.0f,
        -1.0f, 0.0f, 0.0f,
        // Top face
        0.0f, 1.0f, 0.0f,
        0.0f, 1.0f, 0.0f,
        0.0f, 1.0f, 0.0f,
        0.0f, 1.0f, 0.0f,
        0.0f, 1.0f, 0.0f,
        0.0f, 1.0f, 0.0f,
        // Bottom face
        0.0f, -1.0f, 0.0f,
        0.0f, -1.0f, 0.0f,
        0.0f, -1.0f, 0.0f,
        0.0f, -1.0f, 0.0f,
        0.0f, -1.0f, 0.0f,
        0.0f, -1.0f, 0.0f
};
private final float[] mCubeTexture = {
        // Front face
        0.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 1.0f,
        1.0f, 0.0f,
        // Right face
        0.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 1.0f,
        1.0f, 0.0f,
        // Back face
        0.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 1.0f,
        1.0f, 0.0f,
        // Left face
        0.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 1.0f,
        1.0f, 0.0f,
        // Top face
        0.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 1.0f,
        1.0f, 0.0f,
        // Bottom face
        0.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 1.0f,
        1.0f, 0.0f
};
EDIT 3: Texture coordinates rendered as colors, to see if they are being transmitted:
gl_FragColor = vec4(v_TexCoordinate.x, v_TexCoordinate.y, 0, 1); in the fragment shader results in: [screenshot from the original post omitted]
This is a very common issue when you do not work with power-of-2 textures (pixel-wise).
To avoid repeating the same already-provided solution here, please have a look at my previous answer on this matter:
Android OpenGL2.0 showing black textures
I hope this will solve your issue.
Cheers
Maurizio
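On OpenGL ES 2.0, non-power-of-two textures are only guaranteed to work with CLAMP_TO_EDGE wrapping and no mipmaps (which the code above already uses). If dimensions are nevertheless the problem, one common workaround is to rescale the bitmap at load time; a minimal, untested sketch:
// Scale a bitmap up to the next power-of-two size before uploading it.
static Bitmap toPowerOfTwo(Bitmap src) {
    int w = nextPowerOfTwo(src.getWidth());
    int h = nextPowerOfTwo(src.getHeight());
    if (w == src.getWidth() && h == src.getHeight()) {
        return src; // already power-of-two
    }
    return Bitmap.createScaledBitmap(src, w, h, true);
}

static int nextPowerOfTwo(int n) {
    int p = 1;
    while (p < n) {
        p <<= 1;
    }
    return p;
}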
Ok, I found the solution. I was again confused by the way the OpenGL thread works. I was loading the Bitmap in a thread, then using glView.post() to post a Runnable back to what I thought was the OpenGL thread, where the texture was supposed to be generated and bound to the bitmap.
This does not work. What I should have done is:
GLSurfaceView glView = ...;
glView.queueEvent(new Runnable() {
    @Override
    public void run() {
        generateTexture(bitmap);
    }
});
where in generateTexture I execute all the GLES20 texture-generation calls (glGenTextures etc.), since with queueEvent it runs on the real OpenGL thread again, not on the UI thread.
Apparently, my code for using the texture has been correct. Thanks for your help.
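For anyone wiring this up end to end, here is a minimal sketch of the flow this implies (R.drawable.texture is a hypothetical resource id; the decode deliberately happens on a worker thread so that only the GL work is queued):
new Thread(new Runnable() {
    @Override
    public void run() {
        // Decode on a worker thread so neither the UI nor the GL thread blocks on I/O.
        final Bitmap bitmap = BitmapFactory.decodeResource(
                getResources(), R.drawable.texture);
        glView.queueEvent(new Runnable() {
            @Override
            public void run() {
                // Now on the GL thread: safe to call GLES20 functions.
                generateTexture(bitmap);
            }
        });
    }
}).start();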

How to draw with an "inverted" paint in Android Canvas?

I draw some stuff on a canvas, over which I want to draw a circle in inverted color:
canvas.drawCircle(zx, zy, 8f, myPaint);
How do I configure myPaint so that the circle's pixels are the inverted color of the underlying pixels?
Thanks
Try this:
float mx[] = {
        -1.0f, 0.0f, 0.0f, 1.0f, 0.0f,
        0.0f, -1.0f, 0.0f, 1.0f, 0.0f,
        0.0f, 0.0f, -1.0f, 1.0f, 0.0f,
        1.0f, 1.0f, 1.0f, 1.0f, 0.0f
};
ColorMatrix cm = new ColorMatrix(mx);
p.setColorFilter(new ColorMatrixColorFilter(cm));
canvas.drawCircle(zx, zy, 8f, p);
I'd say a color matrix for inverting should look like this:
float mx[] = {
        -1.0f, 0.0f, 0.0f, 0.0f, 255.0f,
        0.0f, -1.0f, 0.0f, 0.0f, 255.0f,
        0.0f, 0.0f, -1.0f, 0.0f, 255.0f,
        0.0f, 0.0f, 0.0f, 1.0f, 0.0f
};
Here is more information on the matrix: the first four columns are per-channel multipliers for R, G, B and A, and the fifth column is a constant offset in the 0..255 range, which is why inversion uses a 255 offset while leaving alpha untouched.

Texture coordinates in OpenGL Android showing image reversed

I wrote OpenGL Android code to show a bitmap on a square, but the bitmap was drawn reversed. When I change the texture array to the commented-out combination, it is drawn correctly. But I insist my texture array must be as below. Am I thinking wrong?
/** The initial vertex definition */
private float vertices[] = {
        -1.0f, 1.0f, 0.0f,  //Top Left
        -1.0f, -1.0f, 0.0f, //Bottom Left
        1.0f, -1.0f, 0.0f,  //Bottom Right
        1.0f, 1.0f, 0.0f    //Top Right
};
/** Our texture pointer */
private int[] textures = new int[1];
/** The initial texture coordinates (u, v) */
private float texture[] = {
        //Mapping coordinates for the vertices
        // 1.0f, 0.0f,
        // 1.0f, 1.0f,
        // 0.0f, 1.0f,
        // 0.0f, 0.0f,
        0.0f, 1.0f,
        0.0f, 0.0f,
        1.0f, 0.0f,
        1.0f, 1.0f,
};
/** The initial indices definition */
private byte indices[] = {
        //2 triangles
        0, 1, 2,  2, 3, 0,
};
Whereas Android uses the top-left corner as 0,0 of the coordinate system, OpenGL uses the bottom-left corner as 0,0, which is why your texture gets flipped.
A common solution to this is to flip your texture at load time:
Matrix flip = new Matrix();
flip.postScale(1f, -1f);
Bitmap bmp = Bitmap.createBitmap(resource, 0, 0, resource.getWidth(), resource.getHeight(), flip, true);
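For context, a hedged sketch of where this flip sits in texture loading (it repeats the three lines above for completeness; R.drawable.texture is a placeholder, and BitmapFactory/GLUtils are the standard Android APIs):
// Decode the resource, flip it vertically, then upload it to the bound texture.
Bitmap resource = BitmapFactory.decodeResource(context.getResources(), R.drawable.texture);
Matrix flip = new Matrix();
flip.postScale(1f, -1f);
Bitmap bmp = Bitmap.createBitmap(resource, 0, 0, resource.getWidth(), resource.getHeight(), flip, true);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bmp, 0);
resource.recycle(); // only the flipped copy is needed from here on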
Actually, I think Will Kru's solution should have flipped the background around both axes:
flip.postScale(-1f, -1f);
That solution worked for me!
This worked for me (no need to create another bitmap and scale it):
private float vertices[] = {
        1.0f, -1.0f, 0.0f,  //Bottom Right
        1.0f, 1.0f, 0.0f,   //Top Right
        -1.0f, 1.0f, 0.0f,  //Top Left
        -1.0f, -1.0f, 0.0f, //Bottom Left
};
