Problems with glDrawTex_OES - android

I'm not sure I'm using glDrawTex_OES correctly. In fact, I'm sure I'm not because I'm getting a black rectangle on the screen rather than the intended texture.
To get the silly points out of the way: Yes, the GL Extension string contains the OES_draw_texture token. Yes, the texture is loaded into memory/etc. correctly: it displays just fine if I map it to a polygon.
From reading the various bits of documentation I can find for it, it looks like I need to "configur[e] the texture crop rectangle ... via TexParameteriv() with pname equal to TEXTURE_CROP_RECT_OES". According to this post in the Khronos forums (the best documentation Google can find for me), the values for that are "Ucr, Vcr, Wcr, Hcr. That is, left/bottom/width/height".
Here's the render code:
void SetUpCamera( int windowWidth, int windowHeight, bool ortho ) // only called once
{
    glViewport( 0, 0, windowWidth, windowHeight );
    glMatrixMode( GL_PROJECTION );
    glLoadIdentity();

    if( ortho )
    {
        float aspect = (float)windowWidth / (float)windowHeight;
        float halfHeight = 32;
        glOrthof( -halfHeight*aspect, halfHeight*aspect, -halfHeight, halfHeight, 1.0f, 100.0f );
    }
    else
    {
        mygluPerspective( 45.0f, (float)windowWidth / (float)windowHeight, 1.0f, 100.0f ); // I don't _actually_ have glu, but that's irrelevant--this does what you'd expect.
    }
}
void DrawTexture( int windowWidth, int windowHeight, int texID )
{
    // Clear back/depth buffer
    glClearColor( 1, 1, 1, 1 );
    glClearDepth( 1.0f );
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

    // Next 2 lines probably not necessary, but, you know, sanity check:
    glMatrixMode( GL_MODELVIEW );
    glLoadIdentity();

    // set up texture
    glBindTexture( GL_TEXTURE_2D, texID ); // texID is the glGenTextures result for a 64x64 2D RGB_8 texture - nothing crazy.
    GLint coords[] = { 0, 0, 64, 64 };
    glTexParameteriv( GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, coords );

    // blit? =(
    glDrawTexiOES( windowWidth / 2 - 32, windowHeight - 70, 10, 64, 64 );
}
Am I missing something obvious? Doing anything just plain dumb?

Apparently the problem is that you're (or I am, as the case may be) not calling glColor() with all color channels at full first. This seems like a bug in the implementation I'm working with (Android, NDK), since the default color should already be (1, 1, 1, 1). Yet calling glColor4f(1, 1, 1, 1) is enough to fix things.
(I have no other calls to glColor in my program. I'm not sure whether there's any other way to update the current color, but the same textures render correctly with the default color when drawn via glEnableClientState(GL_TEXTURE_COORD_ARRAY) / glDrawArrays().)
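For reference, a minimal sketch of the working call order. The question's code is NDK/C, but the sequence is the same through the Java GL11/GL11Ext bindings (gl, texID, windowWidth, and windowHeight are assumed to come from the surrounding renderer code):
// Work around the implementation not starting with the default (1, 1, 1, 1) current color.
gl.glColor4f(1f, 1f, 1f, 1f);
gl.glBindTexture(GL10.GL_TEXTURE_2D, texID);

// Crop rectangle: left, bottom, width, height.
int[] cropRect = { 0, 0, 64, 64 };
((GL11) gl).glTexParameteriv(GL10.GL_TEXTURE_2D, GL11Ext.GL_TEXTURE_CROP_RECT_OES, cropRect, 0);

// Blit the 64x64 texture.
((GL11Ext) gl).glDrawTexiOES(windowWidth / 2 - 32, windowHeight - 70, 10, 64, 64);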

Related

Convert all colors other than a particular color in a bitmap to white

I am using the tess-two library and I want to convert all the colors other than black in my image to white (the black is the text), making it easier for tess-two to read the text. I have tried various methods, but they take too much time because they convert pixel by pixel. Is there a way to achieve this using a canvas, or anything else that gives faster results?
UPDATE
Another problem that came up with this algorithm is that the printer doesn't print with exactly the same black and white as Android uses, so the algorithm converts the whole picture to white.
Here is the pixel-by-pixel method I am currently using:
binarizedImage = convertToMutable(cropped); // make the bitmap mutable
int width = binarizedImage.getWidth();
int height = binarizedImage.getHeight();
int[] pixels = new int[width * height];
binarizedImage.getPixels(pixels, 0, width, 0, 0, width, height);

for (int i = 0; i < binarizedImage.getWidth(); i++) {
    for (int c = 0; c < binarizedImage.getHeight(); c++) {
        int pixel = binarizedImage.getPixel(i, c);
        if (!(pixel == Color.BLACK || pixel == Color.WHITE)) {
            int index = c * width + i;
            pixels[index] = Color.WHITE;
            binarizedImage.setPixels(pixels, 0, width, 0, 0, width, height);
        }
    }
}
Per Rishabh's comment: use a color matrix. Since black is RGB(0, 0, 0, 255), it is immune to multiplication. So if you multiply every color channel by 255, everything overflows and gets clamped to white, except for black, which stays black.
ColorMatrix bc = new ColorMatrix(new float[] {
255, 255, 255, 0, 0,
255, 255, 255, 0, 0,
255, 255, 255, 0, 0,
0, 0, 0, 1, 0,
});
ColorMatrixColorFilter filter = new ColorMatrixColorFilter(bc);
paint.setColorFilter(filter);
You can use that paint to paint that bitmap in only-black-stays-black colormatrix filter glory.
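For example, a minimal sketch of applying that paint (src is an assumed name for the source bitmap):
// Paint the source bitmap through the filter onto a new bitmap;
// everything except pure black comes out white.
Bitmap result = Bitmap.createBitmap(src.getWidth(), src.getHeight(), Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(result);
canvas.drawBitmap(src, 0, 0, paint); // paint already has the ColorMatrixColorFilter set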
Note: this is a quick and awesome trick, but it will ONLY work for black. While it's perfect for your use case and turns that lengthy operation into something instant, it does not actually conform to the title question of "a particular color": my algorithm works with any color you want, so long as it is black.
Though @Tatarize's answer was perfect, I was having trouble reading a printed image, since printed black is not always jet black.
This algorithm, which I found on Stack Overflow, works great: it checks whether a particular pixel is closer to black or to white and converts the pixel to the closer color, giving a binarization with some tolerance (https://stackoverflow.com/a/16534187/3710223).
What I am doing now is keeping the unwanted areas in light colors and the text in black. This algorithm produces a binarized image in approximately 20-35 seconds. Still not that fast, but it gets the job done.
private static boolean shouldBeBlack(int pixel) {
    int alpha = Color.alpha(pixel);
    int redValue = Color.red(pixel);
    int blueValue = Color.blue(pixel);
    int greenValue = Color.green(pixel);

    if (alpha == 0x00) // if this pixel is transparent, let TRANSPARENT_IS_BLACK decide
        return TRANSPARENT_IS_BLACK;

    // distance from the white extreme
    double distanceFromWhite = Math.sqrt(Math.pow(0xff - redValue, 2) + Math.pow(0xff - blueValue, 2) + Math.pow(0xff - greenValue, 2));
    // distance from the black extreme
    double distanceFromBlack = Math.sqrt(Math.pow(0x00 - redValue, 2) + Math.pow(0x00 - blueValue, 2) + Math.pow(0x00 - greenValue, 2));
    // distance between the extremes
    double distance = distanceFromBlack + distanceFromWhite;

    return (distanceFromWhite / distance) > SPACE_BREAKING_POINT;
}
If the return value is true, we convert the pixel to black; otherwise we convert it to white.
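A sketch of the driver loop I use with it (binarizedImage is the same mutable bitmap as above; SPACE_BREAKING_POINT and TRANSPARENT_IS_BLACK are constants defined elsewhere):
// Classify every pixel once, then write the whole array back with a single setPixels() call.
int width = binarizedImage.getWidth();
int height = binarizedImage.getHeight();
int[] pixels = new int[width * height];
binarizedImage.getPixels(pixels, 0, width, 0, 0, width, height);

for (int i = 0; i < pixels.length; i++) {
    pixels[i] = shouldBeBlack(pixels[i]) ? Color.BLACK : Color.WHITE;
}

binarizedImage.setPixels(pixels, 0, width, 0, 0, width, height);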
I know there can be better/faster answers and more answers are welcomed :)
The same thing, but done in RenderScript, takes about 60-100 ms. You won't even notice the delay.
Bitmap blackbitmap = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), bitmap.getConfig());
RenderScript mRS = RenderScript.create(TouchEmbroidery.activity);
ScriptC_blackcheck script = new ScriptC_blackcheck(mRS);

Allocation allocationRaster0 = Allocation.createFromBitmap(
        mRS,
        bitmap,
        Allocation.MipmapControl.MIPMAP_NONE,
        Allocation.USAGE_SCRIPT
);
Allocation allocationRaster1 = Allocation.createTyped(mRS, allocationRaster0.getType());

script.forEach_root(allocationRaster0, allocationRaster1);
allocationRaster1.copyTo(blackbitmap);
This does the allocations and uses RenderScript to write the data out to blackbitmap. The kernel itself:
#pragma version(1)
#pragma rs java_package_name(<YOUR PACKAGENAME GOES HERE>)

void root(const uchar4 *v_in, uchar4 *v_out) {
    uint32_t value = (v_in->r * v_in->r);
    value = value + (v_in->g * v_in->g);
    value = value + (v_in->b * v_in->b);
    if (value > 1200) {
        v_out->r = 255;
        v_out->g = 255;
        v_out->b = 255;
    }
    else {
        v_out->r = 0;
        v_out->g = 0;
        v_out->b = 0;
    }
    v_out->a = 0xFF;
}
Note that 1200 is just the threshold I used; the kernel sums the squares of the R, G, and B channels, so it roughly corresponds to all three components being below 20 (20*20*3 = 1200), or a single channel going as high as sqrt(1200), i.e. about 34. You can set the 1200 limit up or down accordingly.
And the build.gradle needs RenderScript:
renderscriptTargetApi 22
The last few versions of the build tools claim to have fixed a bunch of the RenderScript headaches, so it might be perfectly reasonable to do this kind of thing in mission-critical places like yours. 20 seconds is too long to wait; 60 milliseconds is not.

Coin detection using android opencv

I am trying to detect coins (circles) using OpenCV4Android.
So far I have tried two approaches.
1) Regular method:
// convert image to grayscale
Imgproc.cvtColor(mRgba, mGray, Imgproc.COLOR_RGBA2GRAY);

// apply Gaussian blur
Imgproc.GaussianBlur(mGray, mGray, sSize5, 2, 2);

iMinRadius = 20;
iMaxRadius = 400;
iAccumulator = 300;
iCannyUpperThreshold = 100;

// apply HoughCircles
Imgproc.HoughCircles(mGray, mIntermediateMat, Imgproc.CV_HOUGH_GRADIENT, 2.0, mGray.rows() / 8,
        iCannyUpperThreshold, iAccumulator, iMinRadius, iMaxRadius);

if (mIntermediateMat.cols() > 0) {
    for (int x = 0; x < Math.min(mIntermediateMat.cols(), 10); x++) {
        double vCircle[] = mIntermediateMat.get(0, x);
        if (vCircle == null)
            break;
        pt.x = Math.round(vCircle[0]);
        pt.y = Math.round(vCircle[1]);
        radius = (int) Math.round(vCircle[2]);
        // draw the found circle
        Core.circle(mRgba, pt, radius, colorRed, iLineThickness);
    }
}
2) Sobel and then Hough circles:
// apply Gaussian blur
Imgproc.GaussianBlur(mRgba, mRgba, sSize3, 2, 2, Imgproc.BORDER_DEFAULT);

// convert it to grayscale
Imgproc.cvtColor(mRgba, mGray, Imgproc.COLOR_RGBA2GRAY);

// gradient X
Imgproc.Sobel(mGray, grad_x, CvType.CV_16S, 1, 0, 3, scale, delta, Imgproc.BORDER_DEFAULT);
Core.convertScaleAbs(grad_x, abs_grad_x);

// gradient Y
Imgproc.Sobel(mGray, grad_y, CvType.CV_16S, 0, 1, 3, scale, delta, Imgproc.BORDER_DEFAULT);
Core.convertScaleAbs(grad_y, abs_grad_y);

// total gradient (approximate)
Core.addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad);

iCannyUpperThreshold = 100;

Imgproc.HoughCircles(grad, mIntermediateMat, Imgproc.CV_HOUGH_GRADIENT, 2.0, grad.rows() / 8,
        iCannyUpperThreshold, iAccumulator, iMinRadius, iMaxRadius);

if (mIntermediateMat.cols() > 0) {
    for (int x = 0; x < Math.min(mIntermediateMat.cols(), 10); x++) {
        double vCircle[] = mIntermediateMat.get(0, x);
        if (vCircle == null)
            break;
        pt.x = Math.round(vCircle[0]);
        pt.y = Math.round(vCircle[1]);
        radius = (int) Math.round(vCircle[2]);
        // draw the found circle
        Core.circle(mRgba, pt, radius, colorRed, iLineThickness);
    }
}
Method one gives a fair result for coin detection and method two gives a better result.
Of these two methods, the second is slower to process, but its results are good.
Both of these methods work when the camera frame is captured using JavaCameraView or NativeCameraView from the OpenCV library.
If I use the same procedure on an image captured with the native Android image capture intent, which returns a Bitmap, I get no results at all, i.e. no circles are detected.
With method one I sometimes get a circle detected when using a Bitmap captured with the Android camera intent.
I also tried changing the captured bitmap as suggested in this post, but still no circles are detected.
Can anybody tell me what modifications I have to make?
I would also like to know which algorithm gives better results for coin (circle) detection with less processing.
I have played with various values for the HoughCircles parameters, and also tried Canny edge output as input to HoughCircles, but it is not considerably better.

Android 3D Surface Plot

My requirement is to create a 3D surface plot (it should also display the x, y, and z axes) from a list of data points (x, y, z values). The 3D visualization should be done on Android.
My inputs: I am currently planning on using OpenGL 1.0 and Java. I am also considering Adore3d, min3d, and the rgl package, which use OpenGL 1.0. I am good at Java, but a novice at 3D programming.
Time frame: 2 months.
I would like to know the best way to go about it. Is OpenGL 1.0 good for 3D surface plotting? Are there any other packages/libraries that can be used with Android?
Well, you can plot the surface using OpenGL 1.0 or OpenGL 2.0. All you need to do is to draw the axes as lines and draw the surface as triangles. If you have your heightfield data, you would do:
float[][] surface;
int width, height; // 2D surface data and its dimensions

GL.glBegin(GL.GL_LINES);
GL.glVertex3f(0, 0, 0);      // line starting at 0, 0, 0
GL.glVertex3f(width, 0, 0);  // line ending at width, 0, 0
GL.glVertex3f(0, 0, 0);      // line starting at 0, 0, 0
GL.glVertex3f(0, 0, height); // line ending at 0, 0, height
GL.glVertex3f(0, 0, 0);      // line starting at 0, 0, 0
GL.glVertex3f(0, 50, 0);     // line ending at 0, 50, 0 (50 is the maximal value in surface[])
GL.glEnd();
// display the axes

GL.glBegin(GL.GL_TRIANGLES);
for (int x = 1; x < width; ++x) {
    for (int y = 1; y < height; ++y) {
        float a = surface[x - 1][y - 1];
        float b = surface[x][y - 1];
        float c = surface[x][y];
        float d = surface[x - 1][y];
        // get four points on the surface (they form a quad)

        GL.glVertex3f(x - 1, a, y - 1);
        GL.glVertex3f(x, b, y - 1);
        GL.glVertex3f(x, c, y);
        // draw triangle abc

        GL.glVertex3f(x - 1, a, y - 1);
        GL.glVertex3f(x, c, y);
        GL.glVertex3f(x - 1, d, y);
        // draw triangle acd
    }
}
GL.glEnd();
// display the data
This draws simple axes and the heightfield, all in white. It should be pretty straightforward to extend it from here.
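One caveat: OpenGL ES on Android has no glBegin()/glEnd(), so the immediate-mode calls above are really pseudocode. A minimal sketch of the same triangle generation using OpenGL ES 1.x vertex arrays (reusing surface, width, and height from above; gl is the GL10 instance passed to the renderer):
// Build the heightfield triangles into a FloatBuffer once, then draw them each frame.
int quadCount = (width - 1) * (height - 1);
FloatBuffer tris = ByteBuffer.allocateDirect(quadCount * 6 * 3 * 4) // 6 vertices per quad, 3 floats each, 4 bytes per float
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer();

for (int x = 1; x < width; ++x) {
    for (int y = 1; y < height; ++y) {
        float a = surface[x - 1][y - 1];
        float b = surface[x][y - 1];
        float c = surface[x][y];
        float d = surface[x - 1][y];
        // triangle abc
        tris.put(x - 1).put(a).put(y - 1);
        tris.put(x).put(b).put(y - 1);
        tris.put(x).put(c).put(y);
        // triangle acd
        tris.put(x - 1).put(a).put(y - 1);
        tris.put(x).put(c).put(y);
        tris.put(x - 1).put(d).put(y);
    }
}
tris.position(0);

// in onDrawFrame(GL10 gl):
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, tris);
gl.glDrawArrays(GL10.GL_TRIANGLES, 0, quadCount * 6);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);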
Re the second part of your question:
Any other packages/libraries that can be used with Android?
Yes, it's now possible to draw an Android 3D Surface Plot with SciChart.
Link to Android Chart features page
Link to Android 3D Surface Plot example
Lots of configurations are possible including drawing wireframe, gradient colour maps, contours and real-time updates.
Disclosure: I'm the tech lead on the SciChart team.

gluUnProject always returns zero

I need gluUnProject to convert screen coordinates to world coordinates, and right now I just about have it working. When my app runs it accurately reports the screen coordinates, which I know are stored in my renderer thread, and then pumps out the converted coordinates. Unfortunately the screen coordinates seem to have no effect on the world coordinates, which remain at zero.
Here is my gluUnProject method
public void vector3(GL11 gl) {
    int[] viewport = new int[4];
    float[] modelview = new float[16];
    float[] projection = new float[16];
    float winx, winy, winz;
    float[] newcoords = new float[4];

    gl.glGetIntegerv(GL11.GL_VIEWPORT, viewport, 0);
    gl.glGetFloatv(GL11.GL_MODELVIEW_MATRIX, modelview, 0);
    gl.glGetFloatv(GL11.GL_PROJECTION_MATRIX, projection, 0);

    winx = (float) setx;
    winy = (float) viewport[3] - sety;
    winz = 0;

    GLU.gluUnProject(winx, winy, winz, modelview, 0, projection, 0, viewport, 0, newcoords, 0);

    posx = (int) newcoords[0];
    posy = (int) newcoords[1];
    posz = (int) newcoords[2];

    Log.d(TAG, "x= " + String.valueOf(posx));
    Log.d(TAG, "y= " + String.valueOf(posy));
    Log.d(TAG, "z= " + String.valueOf(posz));
}
Now, I've searched and found this forum post, where they came to the conclusion that it was down to using getFloatv instead of getDoublev, but getDoublev does not seem to be supported by GL11:
The method glGetDoublev(int, float[], int) is undefined for the type GL11
and also
The method glGetDoublev(int, double[], int) is undefined for the type GL11
Should the double vs. float distinction matter, and if so, how do I go about using doubles?
Thank you
EDIT:
I was told that gluUnProject fails when too close to the near or far clipping plane, so I set winz to -5 (with near at 0 and far at -10). This had no effect on the output.
I also logged each element of the newcoords[] array, and they all come back as NaN (not a number). Could this be the problem, or is it something higher up in the algorithm?
I'm guessing you're working on the emulator? Its OpenGL implementation is rather buggy, and after testing I found that it returns all zeroes for the following calls:
gl11.glGetIntegerv(GL11.GL_VIEWPORT, viewport, 0);
gl11.glGetFloatv(GL11.GL_MODELVIEW_MATRIX, modelview, 0);
gl11.glGetFloatv(GL11.GL_PROJECTION_MATRIX, projection, 0);
The gluUnProject() function needs to calculate the inverse of the combined modelview-projection matrix, and since these are all zeroes, the inverse does not exist and will consist of NaNs. The resulting newcoords vector is therefore also all NaNs.
Try it on a device with a proper OpenGL implementation; it should work. Keep in mind that you still need to divide by newcoords[3], though ;-)
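For reference, a minimal sketch of that final divide, reusing newcoords from the question's vector3() method:
// gluUnProject writes homogeneous coordinates (x, y, z, w) into newcoords;
// divide by w to get the actual world-space position.
if (newcoords[3] != 0) {
    float worldX = newcoords[0] / newcoords[3];
    float worldY = newcoords[1] / newcoords[3];
    float worldZ = newcoords[2] / newcoords[3];
    Log.d(TAG, "world = " + worldX + ", " + worldY + ", " + worldZ);
}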

Android draw bitmap on polygons

I am trying to draw a bitmap on a polygon that has more than 4 sides. I was dealing with OpenGL to do this, but I realised that in 2D there is a method in Canvas for this called drawBitmapMesh. It works for a 4-sided polygon, but not for 5.
This works
float verts[] = {0,0, 0,10, 0,20 ,0,30, 10,0, 10,10, 10,20, 10,30, 20,0, 20,10, 20,20, 20,30, 30,0, 30,10, 30,20, 30,30};
canvas.drawBitmapMesh(bitmap, 3, 3, verts, 0, null, 0, null);
This does not work; it gives a runtime error:
float verts[] = {0,0, 0,10, 0,20 ,0,30, 0,40, 10,0, 10,10, 10,20, 10,30,10,40, 20,0, 20,10, 20,20, 20,30,20,40, 30,0, 30,10, 30,20, 30,30,30,40};
canvas.drawBitmapMesh(bitmap, 4, 4, verts, 0, null, 0, null);
From the SDK documentation:
verts Array of x,y pairs, specifying where the mesh should be drawn. There must be at least (meshWidth+1) * (meshHeight+1) * 2 + meshOffset values in the array
You have 40 values in your array, whereas the above calculation with your parameters gives (4+1) * (4+1) * 2 + 0 = 50 values ...
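For illustration, a minimal sketch of building a verts array of the right size for meshWidth = meshHeight = 4 (the 10-pixel cell spacing is just an assumed example value):
// One x,y pair per grid intersection:
// (meshWidth + 1) * (meshHeight + 1) * 2 = 5 * 5 * 2 = 50 floats.
int meshWidth = 4, meshHeight = 4;
float cell = 10f;
float[] verts = new float[(meshWidth + 1) * (meshHeight + 1) * 2];
int i = 0;
for (int row = 0; row <= meshHeight; row++) {
    for (int col = 0; col <= meshWidth; col++) {
        verts[i++] = col * cell; // x
        verts[i++] = row * cell; // y
    }
}
canvas.drawBitmapMesh(bitmap, meshWidth, meshHeight, verts, 0, null, 0, null);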
