Android TextureView Flickering - android

I have tried Romain Guy's TextureView sample code (http://pastebin.com/J4uDgrZ8), and it works great. But when I change lockCanvas(null) into lockCanvas(new Rect(x, y, x+20, y+20)), the example starts to flicker.
It seems lockCanvas(Rect) does not work well with TextureView, or is there some other reason?
I am using a Motorola XOOM with Android 4.0.3.
Thanks for any help!
The code I modified is as follows:
public void run() {
    float x = 0.0f;
    float y = 0.0f;
    float speedX = 5.0f;
    float speedY = 3.0f;

    Paint paint = new Paint();
    paint.setColor(0xff00ff00);

    while (mRunning && !Thread.interrupted()) {
        //final Canvas canvas = mSurface.lockCanvas(null);
        // changed: lock only the region around the square
        final Canvas canvas = mSurface.lockCanvas(new Rect((int) x, (int) y,
                (int) (x + 20.0f), (int) (y + 20.0f)));
        try {
            canvas.drawColor(0x00000000, PorterDuff.Mode.CLEAR);
            canvas.drawRect(x, y, x + 20.0f, y + 20.0f, paint);
        } finally {
            mSurface.unlockCanvasAndPost(canvas);
        }

        if (x + 20.0f + speedX >= mSurface.getWidth() || x + speedX <= 0.0f) {
            speedX = -speedX;
        }
        if (y + 20.0f + speedY >= mSurface.getHeight() || y + speedY <= 0.0f) {
            speedY = -speedY;
        }

        x += speedX;
        y += speedY;

        try {
            //Thread.sleep(15);
            // changed: shorter sleep
            Thread.sleep(1);
        } catch (InterruptedException e) {
            // Interrupted
        }
    }
}
I checked carefully and found that it is "Thread.sleep(1)" together with "lockCanvas(Rect)" that leads to the flicker. With lockCanvas(null), sleep(1) is fine. So lockCanvas(Rect) cannot refresh as fast as lockCanvas(null)?
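A plausible explanation, and a workaround sketch (not verified on the XOOM): with lockCanvas(dirty), only the dirty region is guaranteed to be repainted, while the rest of the buffer is filled from a previously rendered frame. With double or triple buffering, that previous frame is not necessarily the one you drew last, so stale squares can reappear, and the faster loop (sleep(1) instead of sleep(15)) keeps more frames in flight, which would make the artifact visible. Making the dirty rect the union of the previous and current squares keeps both the erase and the redraw inside the locked region:

// Sketch: lock a dirty region covering both the old and the new square,
// so the old square is cleared in the same lockCanvas() call.
Rect prev = new Rect();   // square drawn in the previous frame
Rect curr = new Rect();
Rect dirty = new Rect();

while (mRunning && !Thread.interrupted()) {
    curr.set((int) x, (int) y, (int) (x + 20.0f), (int) (y + 20.0f));
    dirty.set(curr);
    dirty.union(prev);    // cover the old position as well

    final Canvas canvas = mSurface.lockCanvas(dirty);
    try {
        canvas.drawColor(0x00000000, PorterDuff.Mode.CLEAR);
        canvas.drawRect(x, y, x + 20.0f, y + 20.0f, paint);
    } finally {
        mSurface.unlockCanvasAndPost(canvas);
    }
    prev.set(curr);
    // ... bounce logic and sleep unchanged ...
}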

Related

How to capture detected face image using MLKit firebase FaceDetection

I want to capture only the detected face image, not the whole image from the CameraSource.
I am drawing the box like this:
public void draw(Canvas canvas) {
    Face face = mFace;
    if (face == null) {
        return;
    }

    // Draws a circle at the position of the detected face, with the face's track id below.
    float x = translateX(face.getPosition().x + face.getWidth() / 2);
    float y = translateY(face.getPosition().y + face.getHeight() / 2);
    canvas.drawCircle(x, y, FACE_POSITION_RADIUS, mFacePositionPaint);
    canvas.drawText("id: " + mFaceId, x + ID_X_OFFSET, y + ID_Y_OFFSET, mIdPaint);
    canvas.drawText("happiness: " + String.format("%.2f", face.getIsSmilingProbability()), x - ID_X_OFFSET, y - ID_Y_OFFSET, mIdPaint);
    canvas.drawText("right eye: " + String.format("%.2f", face.getIsRightEyeOpenProbability()), x + ID_X_OFFSET * 2, y + ID_Y_OFFSET * 2, mIdPaint);
    canvas.drawText("left eye: " + String.format("%.2f", face.getIsLeftEyeOpenProbability()), x - ID_X_OFFSET * 2, y - ID_Y_OFFSET * 2, mIdPaint);

    // Draws a bounding box around the face.
    float xOffset = scaleX(face.getWidth() / 2.0f);
    float yOffset = scaleY(face.getHeight() / 2.0f);
    float left = x - xOffset;
    float top = y - yOffset;
    float right = x + xOffset;
    float bottom = y + yOffset;

    this.faceTrackingCords.y = top;
    this.faceTrackingCords.width = right;
    this.faceTrackingCords.x = left - right;
    this.faceTrackingCords.height = bottom;

    canvas.drawRect(left, top, right, bottom, mBoxPaint);
}
and I am trying to capture the image like this, but the cropped image is not as expected:
mCameraSource.takePicture(null, new CameraSource.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] bytes) {
        Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
        Bitmap bitmapCropped = Bitmap.createBitmap(bitmap, (int) faceTrackingCords.x, (int) faceTrackingCords.y,
                (int) faceTrackingCords.width, (int) faceTrackingCords.height);
        screenImage.setImageBitmap(bitmapCropped);
    }
});
Kindly help.
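Two things stand out in the draw() method above, sketched here as a possible fix rather than a verified one. First, the rect bookkeeping mixes up its fields (x is set to left - right and width to right). Second, the picture delivered to onPictureTaken() is usually a different resolution than the preview the box was computed against, so the coordinates need scaling; previewWidth and previewHeight below are hypothetical fields holding the preview size the overlay draws in:

// In draw(): store left/top/width/height instead of the mixed-up values.
this.faceTrackingCords.x = left;
this.faceTrackingCords.y = top;
this.faceTrackingCords.width = right - left;
this.faceTrackingCords.height = bottom - top;

// In onPictureTaken(byte[] bytes): scale from preview to picture size.
Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
float scaleX = bitmap.getWidth() / (float) previewWidth;    // hypothetical field
float scaleY = bitmap.getHeight() / (float) previewHeight;  // hypothetical field
Bitmap bitmapCropped = Bitmap.createBitmap(bitmap,
        (int) (faceTrackingCords.x * scaleX),
        (int) (faceTrackingCords.y * scaleY),
        (int) (faceTrackingCords.width * scaleX),
        (int) (faceTrackingCords.height * scaleY));
screenImage.setImageBitmap(bitmapCropped);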

Filling the gap between 2 paths on a canvas

I'm developing an Android app with the help of dlib. Using the 68 face landmarks, I draw one path for the outer part of the lips and another for the inner part. Is there any way, given the two paths, to fill the gap between them?
I've seen approaches like filling the outer one and then filling the inner one with white, so it looks like you're only painting the outer part, but I can't do that since I'm painting over an image and I just want to color the lips. Is there any way to accomplish this?
This is the code I used to paint the lips:
for (final VisionDetRet ret : results) {
    // Draw landmarks
    int x = 0;
    ArrayList<android.graphics.Point> landmarks = ret.getFaceLandmarks();
    int ppX = 0, ppY = 0;

    Paint mFaceLandmardkPaint = new Paint();
    mFaceLandmardkPaint.setColor(Color.GREEN);
    mFaceLandmardkPaint.setStrokeWidth(2);
    mFaceLandmardkPaint.setStyle(Paint.Style.STROKE);

    Path pth = new Path();
    pth.setFillType(Path.FillType.EVEN_ODD);
    Path pth2 = new Path();
    pth2.setFillType(Path.FillType.EVEN_ODD);

    for (android.graphics.Point point : landmarks) {
        if (x >= 48 && x <= 60) {
            int pointX = (int) (point.x);
            int pointY = (int) (point.y);
            if (x != 48) {
                //canvas.drawLine(pointX, pointY, ppX, ppY, mFaceLandmardkPaint);
                pth.lineTo(pointX, pointY);
            } else {
                pth.moveTo(pointX, pointY);
            }
            ppX = pointX;
            ppY = pointY;
            //canvas.drawCircle(pointX, pointY, 1, mFaceLandmardkPaint);
        } else if (x > 60) {
            pth.close();
        }
        if (x >= 61) {
            int pointX = (int) (point.x);
            int pointY = (int) (point.y);
            if (x != 61) {
                //canvas.drawLine(pointX, pointY, ppX, ppY, mFaceLandmardkPaint);
                pth2.lineTo(pointX, pointY);
            } else {
                pth2.moveTo(pointX, pointY);
            }
            ppX = pointX;
            ppY = pointY;
            //canvas.drawCircle(pointX, pointY, 1, mFaceLandmardkPaint);
        }
        x++;
    }
    pth2.close();

    Paint red = new Paint();
    red.setColor(android.graphics.Color.RED);
    red.setStyle(Paint.Style.FILL);
    c.drawPath(pth2, mFaceLandmardkPaint);
    c.drawPath(pth, red);
}
This code generates a fully painted mouth.
Is there any way to accomplish this with the native Canvas, Paint, and Path classes?
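One way to do this with nothing but Path, sketched below for API 19 and above: Path.op() can subtract the inner lip contour from the outer one, leaving a single ring-shaped region that fills only the lips and leaves the mouth opening untouched:

// Sketch (requires API 19+): subtract the inner contour from the outer
// one, then fill only the remaining ring between the two paths.
Path lips = new Path(pth);          // copy of the outer lip contour
lips.op(pth2, Path.Op.DIFFERENCE);  // cut out the inner contour

Paint lipPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
lipPaint.setColor(android.graphics.Color.RED);
lipPaint.setStyle(Paint.Style.FILL);
c.drawPath(lips, lipPaint);

Below API 19, the same effect falls out of the EVEN_ODD fill type already set in the code above: add both contours to a single path and fill it, and the region enclosed by both contours cancels out, so only the ring between them is painted.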

Android Gyroscope for tilting

Thanks in advance for your help.
I have found many examples using the gyroscope, but I couldn't find an adequate one for my case.
I'd like to make a simple quiz game that performs actions when I tilt the device 90 degrees forward or backward. Many examples said I might use the "pitch" value. Could you give me some advice?
I have done a similar thing, where I needed to draw a rectangle that includes nearby places, point it at the place, and show details.
public void onSensorChanged(SensorEvent event) {
    final Handler handler = new Handler();
    switch (event.sensor.getType()) {
        case Sensor.TYPE_ACCELEROMETER:
            mAcceleromterReading =
                    SensorUtilities.filterSensors(event.values, mAcceleromterReading);
            break;
        case Sensor.TYPE_MAGNETIC_FIELD:
            mMagnetometerReading =
                    SensorUtilities.filterSensors(event.values, mMagnetometerReading);
            break;
    }

    float[] orientation =
            SensorUtilities.computeDeviceOrientation(mAcceleromterReading, mMagnetometerReading);
    if (orientation != null) {
        float azimuth = (float) Math.toDegrees(orientation[0]);
        if (azimuth < 0) {
            azimuth += 360f;
        }

        // Convert pitch and roll from radians to degrees
        float pitch = (float) Math.toDegrees(orientation[1]);
        float roll = (float) Math.toDegrees(orientation[2]);

        if (abs(pitch - pitchPrev) > PITCH_THRESHOLD && abs(roll - rollPrev) > ROLL_THRESHOLD
                && abs(azimuth - azimuthPrev) > AZIMUTH_THRESHOLD) { // && abs(roll - rollPrev) > rollThreshold
            if (DashplexManager.getInstance().mlocation != null) {
                mOverlayDisplayView.setHorizontalFOV(mPreview.getHorizontalFOV());
                mOverlayDisplayView.setVerticalFOV(mPreview.getVerticalFOV());
                mOverlayDisplayView.setAzimuth(azimuth);
                mOverlayDisplayView.setPitch(pitch);
                mOverlayDisplayView.setRoll(roll);

                // Redraw the OverlayDisplayView when the sensor data changes,
                // but only while the camera is not pointing straight up or down
                if (pitch <= 75 && pitch >= -75) {
                    //Log.d("issueAR", "invalidate: ");
                    mOverlayDisplayView.invalidate();
                }
            }
            pitchPrev = pitch;
            rollPrev = roll;
            azimuthPrev = azimuth;
        }
    }
}
computeDeviceOrientation method
public static float[] computeDeviceOrientation(float[] accelerometerReading, float[] magnetometerReading) {
    if (accelerometerReading == null || magnetometerReading == null) {
        return null;
    }

    final float[] rotationMatrix = new float[9];
    SensorManager.getRotationMatrix(rotationMatrix, null, accelerometerReading, magnetometerReading);

    // Remap the coordinates with the camera pointing along the Y axis.
    // This way, portrait and landscape orientation return the same azimuth to magnetic north.
    final float[] cameraRotationMatrix = new float[9];
    SensorManager.remapCoordinateSystem(rotationMatrix, SensorManager.AXIS_X,
            SensorManager.AXIS_Z, cameraRotationMatrix);

    final float[] orientationAngles = new float[3];
    SensorManager.getOrientation(cameraRotationMatrix, orientationAngles);

    // Return a float array containing [azimuth, pitch, roll]
    return orientationAngles;
}
onDraw method
@SuppressLint("DrawAllocation")
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    // Log.d("issueAR", "onDraw: ");
    // Log.d("issueAR", "mVerticalFOV: " + mVerticalFOV + " " + "mHorizontalFOV" + mHorizontalFOV);

    // Get the viewports only once
    if (!mGotViewports && mVerticalFOV > 0 && mHorizontalFOV > 0) {
        mViewportHeight = canvas.getHeight() / mVerticalFOV;
        mViewportWidth = canvas.getWidth() / mHorizontalFOV;
        mGotViewports = true;
        //Log.d("onDraw", "mViewportHeight: " + mViewportHeight);
    }
    if (!mGotViewports) {
        return;
    }

    // Set the paints that remain constant only once
    if (!mSetPaints) {
        mTextPaint.setTextAlign(Paint.Align.LEFT);
        mTextPaint.setTextSize(getResources().getDimensionPixelSize(R.dimen.canvas_text_size));
        mTextPaint.setColor(Color.WHITE);
        mOutlinePaint.setStyle(Paint.Style.STROKE);
        mOutlinePaint.setStrokeWidth(mOutline);
        mBubblePaint.setStyle(Paint.Style.FILL);
        mSetPaints = true;
    }

    // Center of view
    float x = canvas.getWidth() / 2;
    float y = canvas.getHeight() / 2;

    /*
     * Uncomment the line below to allow rotation of the display around the center point
     * based on the roll. However, this "feature" is not very intuitive, and requires
     * locking device orientation to portrait or changing the sensor rotation matrix
     * on device rotation. It's really quite a nightmare.
     */
    //canvas.rotate((0.0f - mRoll), x, y);

    float dy = mPitch * mViewportHeight;
    if (mNearbyPlaces != null) {
        //Log.d("OverlayDisplayView", "mNearbyPlaces: " + mNearbyPlaces.size());
        // Iterate backwards to draw more distant places first
        for (int i = mNearbyPlaces.size() - 1; i >= 0; i--) {
            NearbyPlace nearbyPlace = mNearbyPlaces.get(i);
            float xDegreesToTarget = mAzimuth - nearbyPlace.getBearingToPlace();
            float dx = mViewportWidth * xDegreesToTarget;
            float iconX = x - dx;
            float iconY = y - dy;

            if (isOverlapping(iconX, iconX).isOverlapped()) {
                PointF point = calculateNewXY(new PointF(iconX, iconY + mViewportHeight));
                iconX = point.x;
                iconY = point.y;
            }
            nearbyPlace.setIconX(iconX);
            nearbyPlace.setIconY(iconY);

            Bitmap icon = getIcon(nearbyPlace.getIcon_id());
            float width = icon.getWidth() + mTextPaint.measureText(nearbyPlace.getName()) + mMargin;
            RectF recf = new RectF(iconX, iconY, width, icon.getHeight());
            nearbyPlace.setRect(recf);

            float angleToTarget = xDegreesToTarget;
            if (xDegreesToTarget < 0) {
                angleToTarget = 360 + xDegreesToTarget;
            }
            if (angleToTarget >= 0 && angleToTarget < 90) {
                nearbyPlace.setQuadrant(1);
                mQuad1Places.add(nearbyPlace);
            } else if (angleToTarget >= 90 && angleToTarget < 180) {
                nearbyPlace.setQuadrant(2);
                mQuad2Places.add(nearbyPlace);
            } else if (angleToTarget >= 180 && angleToTarget < 270) {
                nearbyPlace.setQuadrant(3);
                mQuad3Places.add(nearbyPlace);
            } else {
                nearbyPlace.setQuadrant(4);
                mQuad4Places.add(nearbyPlace);
            }
            //Log.d("TAG", " - X: " + iconX + " y: " + iconY + " angle: " + angleToTarget + " display: " + nearbyPlace.getIcon_id());
        }

        drawQuadrant(mQuad1Places, canvas);
        drawQuadrant(mQuad2Places, canvas);
        drawQuadrant(mQuad3Places, canvas);
        drawQuadrant(mQuad4Places, canvas);
    }
}
It does not contain the full code, but you can see how pitch and azimuth are used together with roll. Best of luck!
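For the original question, the accelerometer-plus-magnetometer pipeline above already yields pitch, which is enough to detect a roughly 90-degree forward or backward tilt; a gyroscope is not strictly required. A minimal sketch, where the threshold and the two game callbacks are assumptions, not part of the code above:

// Sketch: fire a quiz-game action when the device is tilted close to
// 90 degrees forward or backward.
private static final float TILT_DEGREES = 75f;  // "close enough" to 90
private final float[] rotationMatrix = new float[9];
private final float[] orientation = new float[3];

@Override
public void onSensorChanged(SensorEvent event) {
    // ... store accelerometer/magnetometer readings as above ...
    if (SensorManager.getRotationMatrix(rotationMatrix, null,
            mAcceleromterReading, mMagnetometerReading)) {
        SensorManager.getOrientation(rotationMatrix, orientation);
        // orientation[1] is pitch, the forward/backward tilt, in radians
        float pitch = (float) Math.toDegrees(orientation[1]);
        if (pitch > TILT_DEGREES) {
            onTiltedForward();       // hypothetical game callback
        } else if (pitch < -TILT_DEGREES) {
            onTiltedBackward();      // hypothetical game callback
        }
    }
}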

How to map Frame coordinates to Overlay in vision

I have a feeling this question has already been solved many times, but I cannot figure it out. I was basically following this little tutorial about Mobile Vision and completed it. After that I tried to detect objects myself, starting with a color blob and drawing its borders.
The idea is to start in the middle of the frame (holding the object in the middle of the camera on purpose) and detect the edges of that object by its color. It works as long as I hold the phone in landscape mode (Frame.ROTATION_0). As soon as I'm in portrait mode (Frame.ROTATION_90), the bounding Rect gets drawn rotated, so an object with more height gets drawn with more width, and also a bit off.
The docs say that a detector always delivers coordinates relative to an unrotated, upright frame, so how am I supposed to calculate the bounding rectangle coordinates relative to its rotation?
I don't think it matters much, but here is how I find the color Rect:
public Rect getBounds(Frame frame) {
    int w = frame.getMetadata().getWidth();
    int h = frame.getMetadata().getHeight();
    int scale = 50;
    int scaleX = w / scale;
    int scaleY = h / scale;
    int midX = w / 2;
    int midY = h / 2;
    float ratio = 10.0f;

    Rect mBoundary = new Rect();
    float[] hsv = new float[3];
    Bitmap bmp = frame.getBitmap();
    int px = bmp.getPixel(midX, midY);
    Color.colorToHSV(px, hsv);
    Log.d(TAG, "detect: mid hsv: " + hsv[0] + ", " + hsv[1] + ", " + hsv[2]);
    float hue = hsv[0];
    float nhue;
    int x, y;

    for (x = midX + scaleX; x < w; x += scaleX) {
        px = bmp.getPixel(x, midY);
        Color.colorToHSV(px, hsv);
        nhue = hsv[0];
        if (nhue <= (hue + ratio) && nhue >= (hue - ratio)) {
            mBoundary.right = x;
        } else {
            break;
        }
    }
    for (x = midX - scaleX; x >= 0; x -= scaleX) {
        px = bmp.getPixel(x, midY);
        Color.colorToHSV(px, hsv);
        nhue = hsv[0];
        if (nhue <= (hue + ratio) && nhue >= (hue - ratio)) {
            mBoundary.left = x;
        } else {
            break;
        }
    }
    for (y = midY + scaleY; y < h; y += scaleY) {
        px = bmp.getPixel(midX, y);
        Color.colorToHSV(px, hsv);
        nhue = hsv[0];
        if (nhue <= (hue + ratio) && nhue >= (hue - ratio)) {
            mBoundary.bottom = y;
        } else {
            break;
        }
    }
    for (y = midY - scaleY; y >= 0; y -= scaleY) {
        px = bmp.getPixel(midX, y);
        Color.colorToHSV(px, hsv);
        nhue = hsv[0];
        if (nhue <= (hue + ratio) && nhue >= (hue - ratio)) {
            mBoundary.top = y;
        } else {
            break;
        }
    }
    return mBoundary;
}
Then I simply draw it on the canvas in the GraphicOverlay.Graphic draw method. I already use the transformX/Y methods on the Graphic and thought they would also account for the rotation.
I also use the CameraSource and CameraSourcePreview classes provided with the samples.
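One way to handle it, sketched under the assumption that ROTATION_90 means the frame content must be rotated 90 degrees clockwise to appear upright (worth verifying on your device): remap the detected Rect from raw frame coordinates into upright coordinates before handing it to the overlay, swapping width and height in the 90 and 270 degree cases:

// Sketch: map a Rect computed on the unrotated frame into upright
// coordinates. frameW/frameH are the unrotated frame dimensions.
public Rect rotateBounds(Rect r, int frameW, int frameH, int rotation) {
    switch (rotation) {
        case Frame.ROTATION_90:
            // (x, y) -> (frameH - y, x); width and height swap
            return new Rect(frameH - r.bottom, r.left,
                    frameH - r.top, r.right);
        case Frame.ROTATION_180:
            // (x, y) -> (frameW - x, frameH - y)
            return new Rect(frameW - r.right, frameH - r.bottom,
                    frameW - r.left, frameH - r.top);
        case Frame.ROTATION_270:
            // (x, y) -> (y, frameW - x); width and height swap
            return new Rect(r.top, frameW - r.right,
                    r.bottom, frameW - r.left);
        default:
            return new Rect(r);
    }
}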

hitTest on children of a View - how to translate the MotionEvent coordinates

Being an Android programming newbie, I am trying to find out whether the user has touched a child (the yellow tile at the left in the picture below) of a custom View (source code: MyView.java):
For hit testing I have written the following method:
private Drawable hitTest(int x, int y) {
    for (Drawable tile : mTiles) {
        Rect rect = tile.getBounds();
        if (rect.contains(x, y))
            return tile;
    }
    return null;
}
However, I don't know how to translate the MotionEvent coordinates before passing them to the above method. I have tried many possible combinations involving the mScale and mOffset properties of my custom View (I do not want to use the View.scrollTo() and View.setScaleX() methods, but handle the offset and scale myself).
When I start randomly touching the screen, I sometimes hit the tiles, so I can see that my hitTest method is okay; I just need to figure out the proper translation.
Am I missing something here? Should I maybe add some translation between dp and real pixels?
public boolean onTouchEvent(MotionEvent e) {
    Log.d("onToucheEvent", "mScale=" + mScale +
            ", mOffsetX=" + mOffsetX +
            ", mOffsetY=" + mOffsetY +
            ", e.getX()=" + e.getX() +
            ", e.getY()=" + e.getY() +
            ", e.getRawX()=" + e.getRawX() +
            ", e.getRawY()=" + e.getRawY()
    );

    int x = (int) (e.getX() / mScale - mOffsetX);
    int y = (int) (e.getY() / mScale - mOffsetY);
    Drawable tile = hitTest(x, y);
    Log.d("onToucheEvent", "tile=" + tile);

    boolean retVal = mScaleDetector.onTouchEvent(e);
    retVal = mGestureDetector.onTouchEvent(e) || retVal;
    return retVal || super.onTouchEvent(e);
}
UPDATE: Following pskink's advice (thanks), I am trying to use a Matrix and have changed my custom MyView.java to:
protected void onDraw(Canvas canvas) {
    mMatrix.reset();
    mMatrix.setTranslate(mOffsetX, mOffsetY);
    mMatrix.postScale(mScale, mScale);
    canvas.setMatrix(mMatrix);

    mGameBoard.draw(canvas);
    for (Drawable tile : mTiles) {
        tile.draw(canvas);
    }
}
public boolean onTouchEvent(MotionEvent e) {
    float[] point = new float[] {e.getX(), e.getY()};
    Matrix inverse = new Matrix();
    mMatrix.invert(inverse);
    inverse.mapPoints(point);

    float density = getResources().getDisplayMetrics().density;
    point[0] /= density;
    point[1] /= density;

    Drawable tile = hitTest((int) point[0], (int) point[1]);
    Log.d("onToucheEvent", "tile=" + tile);
    return true; // return added so the snippet compiles
}
But unfortunately my hitTest() does not find any touched tiles.
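Two details worth checking, sketched below under the assumption that the tile bounds are set in plain pixels, i.e. the same space onDraw() draws in. First, canvas.setMatrix() replaces the canvas' current matrix, which already carries the view's own transform on a hardware-accelerated canvas, so canvas.concat() is the safer call. Second, the division by density is the likely reason hitTest() misses: the inverted matrix already maps the touch back into the coordinate space the tiles were laid out in.

// Sketch: concat instead of setMatrix, and no density division.
@Override
protected void onDraw(Canvas canvas) {
    mMatrix.reset();
    mMatrix.setTranslate(mOffsetX, mOffsetY);
    mMatrix.postScale(mScale, mScale);
    canvas.save();
    canvas.concat(mMatrix);   // keeps the canvas' own transform intact
    mGameBoard.draw(canvas);
    for (Drawable tile : mTiles) {
        tile.draw(canvas);
    }
    canvas.restore();
}

@Override
public boolean onTouchEvent(MotionEvent e) {
    float[] point = {e.getX(), e.getY()};
    Matrix inverse = new Matrix();
    mMatrix.invert(inverse);
    inverse.mapPoints(point);   // now in tile.getBounds() space
    Drawable tile = hitTest((int) point[0], (int) point[1]);
    // gesture detectors omitted for brevity
    return tile != null || super.onTouchEvent(e);
}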
