Problems with glReadPixels in Android

I am trying to implement an OpenGL picking system I read about and have hit an issue with glReadPixels. Every node in the scene gets a unique color, and when a new touch happens, the scene is rendered with each node painted in its unique color ID. I then check the pixel under the touch coordinate against a list of stored color IDs.
I cannot get glReadPixels to work correctly: it always returns 0 0 0 for the pixel values. I would really appreciate any help with getting the correct pixel values from it. Thanks.
Here is the relevant code:
private void handleEvent(MotionEvent event) {
    int x = (int) event.getX();
    int y = (int) event.getY();
    int action = event.getAction();
    int actionCode = action & MotionEvent.ACTION_MASK;
    if (actionCode == MotionEvent.ACTION_DOWN) {
        // Paint with colorID color
        mSettings.picking(true);
        dumpEvent(event);
        // Note: glReadPixels must run on the GL thread with a current context;
        // calling it from the touch handler's thread returns nothing useful.
        final GL10 gl = mSettings.getGL();
        ByteBuffer pixelBuffer = ByteBuffer.allocateDirect(4);
        pixelBuffer.order(ByteOrder.nativeOrder());
        gl.glPixelStorei(GL10.GL_PACK_ALIGNMENT, 1); // reads use PACK, not UNPACK
        // GL's origin is bottom-left while MotionEvent's is top-left, so flip y
        // (surfaceHeight stands for the GL surface height, not shown here)
        gl.glReadPixels(x, surfaceHeight - y, 1, 1, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixelBuffer);
        byte[] b = new byte[4];
        pixelBuffer.get(b);
        // Mask with 0xff: Java bytes are signed
        String key = "" + (b[0] & 0xff) + (b[1] & 0xff) + (b[2] & 0xff);
        // Check for selection
        mRenderer.processSelection(event,
                new SGColorI(b[0] & 0xff, b[1] & 0xff, b[2] & 0xff, b[3] & 0xff));
        log.pl("GL on touchdown", key);
    } else if (actionCode == MotionEvent.ACTION_MOVE) {
        mSettings.picking(false);
    }
}

allocateDirect "just doesn't work" in this case; use allocate.
It seemed very strange to me, but allocate vs. allocateDirect was the only difference I found.
This post in Google Groups also helped me a lot.
Note that this discovery was made on the emulator (several different versions), not on a real device.
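Whichever allocation works on your device, building the key from raw bytes has a second pitfall: Java bytes are signed, so color components above 127 come back negative unless masked. A minimal sketch of the unsigned extraction (plain Java only, no GL calls; the sample byte values are made up):

```java
import java.nio.ByteBuffer;

public class PixelKey {
    // Build a stable key from the RGB bytes returned by glReadPixels.
    // Java bytes are signed, so 0xFF masking is required for values > 127.
    static String colorKey(ByteBuffer pixels) {
        int r = pixels.get(0) & 0xFF;
        int g = pixels.get(1) & 0xFF;
        int b = pixels.get(2) & 0xFF;
        return r + "," + g + "," + b;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(4); // as the answer above suggests
        buf.put(new byte[] { (byte) 200, (byte) 0, (byte) 130, (byte) 255 });
        System.out.println(colorKey(buf)); // prints "200,0,130"
    }
}
```

Without the mask, the same buffer would produce "-56,0,-126" and never match a stored color ID above 127.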

Using glReadPixels for the classical picking problem is overkill. Remember that glReadPixels is a blocking call, while OpenGL works mostly asynchronously: calling glReadPixels forces every command still in the driver queue to be processed first, which can be a huge waste of time.

Related

PixelBuffer Object and glReadPixel on Android(ARCore) blocking

I know that by default glReadPixels() waits until all drawing commands have executed on the GL thread, but when a PixelBuffer Object is bound, glReadPixels() should be asynchronous and not wait for anything.
Yet when I bind a PBO and call glReadPixels(), it still blocks for some time.
Here's how I initialize the PBO:
mPboIds = IntBuffer.allocate(2);
GLES30.glGenBuffers(2, mPboIds);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, mPboIds.get(0));
GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, mPboSize, null, GLES30.GL_STATIC_READ); //allocates only memory space given data size
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, mPboIds.get(1));
GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, mPboSize, null, GLES30.GL_STATIC_READ);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
and then I use the two buffers to ping-pong around:
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, mPboIds.get(mPboIndex)); // 1st PBO
JNIWrapper.glReadPixels(0, 0, mRowStride / mPixelStride, (int) height, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE); // read pixels from the screen into the bound PBO (native C++ code)
// don't map anything in the first frame
if (mInitRecord) {
    GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
    // swap the indices
    mPboIndex = (mPboIndex + 1) % 2;
    mPboNewIndex = (mPboNewIndex + 1) % 2;
    mInitRecord = false;
    return;
}
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, mPboIds.get(mPboNewIndex)); // 2nd PBO
// glMapBufferRange returns a pointer to the buffer object's data.
// This is the same thing as calling glReadPixels() without a bound PBO,
// except that the call can be pipelined.
ByteBuffer byteBuffer = (ByteBuffer) GLES30.glMapBufferRange(GLES30.GL_PIXEL_PACK_BUFFER, 0, mPboSize, GLES30.GL_MAP_READ_BIT); // download from the GPU to the CPU
Bitmap bitmap = Bitmap.createBitmap((int) mScreenWidth, (int) mScreenHeight, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(byteBuffer);
GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
// swap the indices
mPboIndex = (mPboIndex + 1) % 2;
mPboNewIndex = (mPboNewIndex + 1) % 2;
This is called in my draw method every frame.
From my understanding glReadPixels should not take any time at all, but it is taking around 25 ms (on a Google Pixel 2), and creating the bitmap takes another 40 ms. That achieves only about 13 FPS, which is worse than glReadPixels without a PBO.
Is there anything that I'm missing or wrong in my code?
EDITED since you pointed out that my original hypothesis was incorrect (initial PboIndex == PboNextIndex). Hoping to be helpful, here is C++ code that I just wrote on the native side called through JNI from Android using GLES 3. It seems to work and not block on glReadPixels(...). Note there is only a single glPboIndex variable:
glBindBuffer(GL_PIXEL_PACK_BUFFER, glPboIds[glPboIndex]);
glReadPixels(0, 0, frameWidth_, frameHeight_, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glPboReady[glPboIndex] = true;
glPboIndex = (glPboIndex + 1) % 2;
if (glPboReady[glPboIndex]) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, glPboIds[glPboIndex]);
    GLubyte* rgbaBytes = (GLubyte*) glMapBufferRange(
        GL_PIXEL_PACK_BUFFER, 0, frameByteCount_, GL_MAP_READ_BIT);
    if (rgbaBytes) {
        size_t minYuvByteCount = frameWidth_ * frameHeight_ * 3 / 2; // 12 bits/pixel
        if (videoFrameBufferSize_ < minYuvByteCount) {
            return; // !!! not logging error inside render loop
        }
        convertToVideoYuv420NV21FromRgbaInverted(
            videoFrameBufferAddress_, rgbaBytes,
            frameWidth_, frameHeight_);
    }
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    glPboReady[glPboIndex] = false;
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
...
previous unfounded hypothesis:
Your question doesn't show the code that sets the initial values of mPboIndex and mPboNewIndex, but if they are given identical initial values, such as 0, they will match on every loop iteration, so you end up mapping the same PBO that was just read. In that scenario, even though 2 PBOs exist, they are never alternated between glReadPixels and glMapBufferRange, and the map call then blocks until the GPU completes the data transfer. I suggest this change to ensure that the PBOs alternate:
mPboNewIndex = mPboIndex;
mPboIndex = (mPboNewIndex + 1) % 2;
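To make the alternation concrete, here is a standalone sketch of the index rotation only (no GL calls; buffer IDs replaced by plain ints). With this pattern the buffer being mapped each frame is always the one glReadPixels wrote into on the previous frame, never the one just handed to glReadPixels:

```java
public class PboIndexDemo {
    // Advance the two PBO indices so that the buffer mapped this frame is
    // always the one glReadPixels filled on the previous frame.
    // Returns {newReadIndex, newMapIndex}.
    static int[] advance(int readIndex) {
        int mapIndex = readIndex;                          // map what was just read...
        return new int[] { (mapIndex + 1) % 2, mapIndex }; // ...one frame later
    }

    public static void main(String[] args) {
        int read = 0, map = 1; // indices only; no GL objects in this sketch
        for (int frame = 0; frame < 4; frame++) {
            System.out.println("frame " + frame + ": glReadPixels -> PBO " + read
                    + ", glMapBufferRange -> PBO " + map);
            int[] next = advance(read);
            read = next[0];
            map = next[1];
        }
    }
}
```

Running it shows the two indices swapping every frame and never colliding, which is the property that keeps glMapBufferRange from stalling on an in-flight transfer.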

Custom ImageView, ImageMatrix.mapPoints() and invert() inaccurate?

I wrote a custom ImageView (Android) for my app, which works fine. For several purposes I need to map view coordinates (e.g. the coordinates of a click) to image coordinates (where in the image the click happened), since the image can be zoomed and scrolled. So far the methods below seemed to work fine:
private PointF viewPointToImgPointF(PointF viewPoint) {
    final float[] coords = new float[] { viewPoint.x, viewPoint.y };
    Matrix matrix = new Matrix();
    getImageMatrix().invert(matrix);
    matrix.mapPoints(coords); // --> PointF in image as scaled originally
    if (coords.length > 1) {
        return new PointF(coords[0] * inSampleSize, coords[1] * inSampleSize);
    } else {
        return null;
    }
}

private PointF imgPointToViewPointF(PointF imgPoint) {
    final float[] coords = new float[] { imgPoint.x / inSampleSize, imgPoint.y / inSampleSize };
    getImageMatrix().mapPoints(coords);
    if (coords.length > 1) {
        return new PointF(coords[0], coords[1]);
    } else {
        return null;
    }
}
In most cases these methods work without problems, but I noticed that they are not 100% accurate when I tried the following:
PointF a = new PointF(100,100);
PointF b = imgPointToViewPointF(a);
PointF a2 = viewPointToImgPointF(b);
PointF b2 = imgPointToViewPointF(a2);
PointF a3 = viewPointToImgPointF(b2);
and got these values:
a: Point(100, 100)
b: Point(56, 569)
a2: Point(99, 99)
b2: Point(55, 568)
a3: Point(97, 97)
If everything worked correctly, all a-values and all b-values should have stayed the same.
I also found that this small difference decreases towards the center of the image: if (100, 100) were the center of the image, the methods would deliver correct results!
Has anybody experienced something similar, and maybe even has a solution? Or am I doing something wrong?
(inSampleSize is 1 for the image I tested; it represents the factor by which the image has been resized to save memory.)
I haven't checked this exact scenario, but it looks like your problem is the repeated casting of floats back to ints, with those values then used as floats again in the next iteration. The precision lost in each cast is compounded every time.
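That compounding effect is easy to reproduce without any Android classes. A sketch with a made-up zoom factor of 5.7 (not a value from the question) that truncates to int after each mapping, just like the repeated round trip above:

```java
public class CastDrift {
    // Repeatedly map a view coordinate to an image coordinate and back,
    // truncating to int each time, the way a repeated map/invert round trip
    // with integer coordinates does. Returns the image coordinate per iteration.
    static int[] drift(int view, float scale, int iterations) {
        int[] img = new int[iterations];
        for (int i = 0; i < iterations; i++) {
            img[i] = (int) (view / scale); // truncating cast drops the fraction
            view = (int) (img[i] * scale); // mapping back compounds the loss
        }
        return img;
    }

    public static void main(String[] args) {
        // 5.7f is a hypothetical zoom factor chosen for illustration
        for (int v : drift(569, 5.7f, 3)) {
            System.out.println(v); // prints 99, 98, 97
        }
    }
}
```

The image coordinate walks downward by one per round trip, the same drift pattern as the a, a2, a3 values in the question.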
OK, it seems it really was a bug in Android < 4.4.
It works now on devices that were updated, but still happens on devices with 4.3 and older.
So I assume there is not much I can do to avoid it on older devices (except doing the whole matrix calculation myself, which I surely won't...).

Fingerprint Comparing Code

I have saved fingerprint impressions as bitmaps in an SQLite database. Can anybody help me with source code, or a link to Android code, to compare two fingerprint bitmaps for equality?
I have tried the following code, but it reports a match with every fingerprint impression stored in the database.
public boolean compare(Bitmap imageToCompare, Bitmap imageInDb) {
    System.out.println("Inside Compare");
    System.out.println("imageToCompare::::" + imageToCompare);
    System.out.println("imageInDb::::" + imageInDb);
    int pixelCount = mImageWidth * mImageHeight;
    System.out.println("pixelCount::::" + pixelCount);
    int[] pixels1 = new int[pixelCount];
    int[] pixels2 = new int[pixelCount];
    imageToCompare.getPixels(pixels1, 0, mImageWidth, 0, 0, mImageWidth, mImageHeight);
    imageInDb.getPixels(pixels2, 0, mImageWidth, 0, 0, mImageWidth, mImageHeight);
    for (int i = 0; i < pixelCount; i++) {
        if (pixels1[i] != pixels2[i]) {
            return false;
        }
    }
    return true;
}
thanks
You cannot compare fingerprints the way you compare strings or pixels. That would only work if you captured the exact same fingerprint image again and again. In reality, two scans of the same finger always differ slightly in position, angle, and scan quality, so string/pixel comparison is not possible.
All scanners provide an SDK for capturing fingerprints and comparing them (1:1), which you can use to develop a desktop application. If you need to compare the scanned images on a server, you either need to implement your own Automatic Fingerprint Identification algorithm or use a third-party service such as
CAMS
M2Sys
Neurotechnology
To compare two bitmaps you can use the OpenCV library for Android. There are many functions for image comparison; the topic is wide and depends on the precision you need. You could start with the cvNorm function.
The links below point to the OpenCV library and samples compiled for Java, so you don't have to write native code in C.
link
openCV website
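As a middle ground before pulling in OpenCV, a difference score over the raw pixel arrays at least degrades gracefully instead of demanding an exact match. This is only an illustration of the scoring idea, not real fingerprint matching, which needs feature-based algorithms as the other answer explains:

```java
public class PixelDiff {
    // Fraction of pixels whose packed ARGB values differ between two
    // equally-sized images: 0.0 = identical, 1.0 = completely different.
    // A threshold on this score replaces the all-or-nothing equality test.
    static double differenceRatio(int[] a, int[] b) {
        if (a.length != b.length) throw new IllegalArgumentException("size mismatch");
        int diff = 0;
        for (int i = 0; i < a.length; i++) {
            if (a[i] != b[i]) diff++;
        }
        return (double) diff / a.length;
    }

    public static void main(String[] args) {
        // Tiny made-up 2x2 "images" as packed ARGB ints
        int[] img1 = { 0xFF000000, 0xFFFFFFFF, 0xFF000000, 0xFFFFFFFF };
        int[] img2 = { 0xFF000000, 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF };
        System.out.println(differenceRatio(img1, img2)); // prints 0.25
    }
}
```

On Android the int arrays would come from Bitmap.getPixels(), as in the question's code. For real matching the score is far too crude: a one-pixel shift of an identical print already pushes it toward 1.0.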

Android Color Picking Not Getting Correct Colors

I implemented a simple color picker for my application. It is working, except that the objects are not drawn with their exact color IDs: color IDs of 22,0,0, 23,0,0 and 24,0,0 might all be read back by glReadPixels as 22,0,0. I tried disabling dithering, but I'm not sure whether there is another GL setting I have to enable or disable to get the objects drawn with their exact color IDs.
if (picked) {
    GLES10.glClear(GLES10.GL_COLOR_BUFFER_BIT | GLES10.GL_DEPTH_BUFFER_BIT);
    GLES10.glDisable(GLES10.GL_TEXTURE_2D);
    GLES10.glDisable(GLES10.GL_LIGHTING);
    GLES10.glDisable(GLES10.GL_FOG);
    GLES10.glPushMatrix();
    Camera.Draw(gl);
    for (Actor actor : ActorManager.actors) {
        actor.Picking();
    }
    ByteBuffer pixels = ByteBuffer.allocate(4);
    GLES10.glReadPixels(x, (int) _height - y, 1, 1, GLES10.GL_RGBA, GLES10.GL_UNSIGNED_BYTE, pixels);
    for (Actor actor : ActorManager.actors) {
        if (actor._colorID[0] == (pixels.get(0) & 0xff)
                && actor._colorID[1] == (pixels.get(1) & 0xff)
                && actor._colorID[2] == (pixels.get(2) & 0xff)) {
            actor._location.y += -1;
        }
    }
    GLES10.glPopMatrix();
    picked = false;
}

public void Picking() {
    GLES10.glPushMatrix();
    GLES10.glTranslatef(_location.x, _location.y, _location.z);
    GLES10.glVertexPointer(3, GLES10.GL_FLOAT, 0, _vertexBuffer);
    GLES10.glColor4f((_colorID[0] & 0xff) / 255.0f, (_colorID[1] & 0xff) / 255.0f,
            (_colorID[2] & 0xff) / 255.0f, 1.0f);
    GLES10.glDrawElements(GLES10.GL_TRIANGLES, _numIndicies,
            GLES10.GL_UNSIGNED_SHORT, _indexBuffer);
    GLES10.glPopMatrix();
}
It turns out Android does not default to an 8888 color mode, and that was causing the colors not to be drawn (and read back) exactly. By requesting the correct color format in the activity I solved the problem, as long as the device supports it. I will probably have to render to a separate buffer or texture later if I want my program to support more devices, but this works for now.
_graphicsView.getHolder().setFormat(PixelFormat.RGBA_8888);
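What a 16-bit surface does to neighboring color IDs can be shown without any GL code: in RGB565 the red channel keeps only its top 5 bits, so values like 22 and 23 collapse to the same stored color. A small sketch of that round trip (plain Java; bit-replication on expansion is assumed here, which is one common scheme, not necessarily what every GPU does):

```java
public class Rgb565Demo {
    // Quantize an 8-bit red channel to 5 bits (as an RGB565 framebuffer
    // stores it) and expand back to 8 bits by bit replication.
    static int roundTripRed(int r8) {
        int r5 = r8 >> 3;               // keep only the top 5 bits
        return (r5 << 3) | (r5 >> 2);   // expand back to 8 bits
    }

    public static void main(String[] args) {
        for (int r = 22; r <= 24; r++) {
            System.out.println(r + " -> " + roundTripRed(r));
            // 22 -> 16, 23 -> 16, 24 -> 24
        }
    }
}
```

Since 22 and 23 both survive the round trip as 16, no picking scheme can tell them apart on such a surface; requesting PixelFormat.RGBA_8888 as above removes the quantization.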

Android - How do I get raw touch screen information?

I'm working on a painting application for Android and I'd like to use raw data from the device's touch screen to adjust the user's paint brush as they draw. I've seen other apps for Android (iSteam, for example) where the size of the brush is based on the size of your fingerprint on the screen. As far as painting apps go, that would be a huge feature.
Is there a way to get this data? I've googled for quite a while, but I haven't found any source demonstrating it. I know it's possible, because Dolphin Browser adds multi-touch support to the Hero without any changes beneath the application level. You must be able to get a 2D matrix of raw data or something...
I'd really appreciate any help I can get!
There are some useful properties in the MotionEvent class. You can use the getSize() method to find the size of the touch contact; MotionEvent also gives access to pressure, coordinates, etc.
If you check the API Demos in the SDK, there's a simple painting app called TouchPaint:
package com.example.android.apis.graphics;
It uses the following to draw on the canvas:
@Override
public boolean onTouchEvent(MotionEvent event) {
    int action = event.getAction();
    mCurDown = action == MotionEvent.ACTION_DOWN
            || action == MotionEvent.ACTION_MOVE;
    int N = event.getHistorySize();
    for (int i = 0; i < N; i++) {
        //Log.i("TouchPaint", "Intermediate pointer #" + i);
        drawPoint(event.getHistoricalX(i), event.getHistoricalY(i),
                event.getHistoricalPressure(i),
                event.getHistoricalSize(i));
    }
    drawPoint(event.getX(), event.getY(), event.getPressure(),
            event.getSize());
    return true;
}

private void drawPoint(float x, float y, float pressure, float size) {
    //Log.i("TouchPaint", "Drawing: " + x + "x" + y + " p="
    //        + pressure + " s=" + size);
    mCurX = (int) x;
    mCurY = (int) y;
    mCurPressure = pressure;
    mCurSize = size;
    mCurWidth = (int) (mCurSize * (getWidth() / 3));
    if (mCurWidth < 1) mCurWidth = 1;
    if (mCurDown && mBitmap != null) {
        int pressureLevel = (int) (mCurPressure * 255);
        mPaint.setARGB(pressureLevel, 255, 255, 255);
        mCanvas.drawCircle(mCurX, mCurY, mCurWidth, mPaint);
        mRect.set(mCurX - mCurWidth - 2, mCurY - mCurWidth - 2,
                mCurX + mCurWidth + 2, mCurY + mCurWidth + 2);
        invalidate(mRect);
    }
    mFadeSteps = 0;
}
Hope that helps :)
I'm working on something similar, and I'd suggest looking at the Canvas and Paint classes as well. MotionEvent's getHistorySize() might also be helpful for figuring out how long a particular stroke has been in play.
