I have a bitmap image that I'm trying to do a hit test on. The hit test works if it's just a normal bitmap, but I need to rotate and scale the bitmap and I can't figure out the hit test for that case.
x and y here are the cursor coordinates. I need to check whether the cursor (finger press) landed inside the manipulated bitmap. The scale seems to work fine, but the rotation doesn't seem to take effect.
float[] pts = new float[4];
float left = m.getX();
float top = m.getY();
float right = left + mBitmaps.get(i).getWidth();
float bottom = top + mBitmaps.get(i).getHeight();
pts[0] = left;
pts[1] = top;
pts[2] = right;
pts[3] = bottom;
float midx = left + mBitmaps.get(i).getWidth()/2;
float midy = top + mBitmaps.get(i).getHeight()/2;
Matrix matrix = new Matrix();
matrix.setRotate(m.getRotation(), midx, midy);
matrix.setScale(m.getSize(), m.getSize(), midx, midy);
matrix.mapPoints(pts);
if(x >= pts[0] && x <= pts[2] && y >= pts[1] && y <= pts[3])
{
return i;
}
Your test fails because after rotation the rectangle is no longer aligned to the coordinate axes. (Note also that calling setScale after setRotate resets the matrix, which is why the rotation never seemed to take effect; use postScale to combine the two, as below.)
A trick you can do is to transform the cursor position back with the inverse transformation matrix and then compare the transformed position against the original, axis-aligned rectangle.
Matrix matrix = new Matrix();
matrix.setRotate(m.getRotation(), midx, midy);
matrix.postScale(m.getSize(), m.getSize(), midx, midy);
Matrix inverse = new Matrix();
matrix.invert(inverse);
pts[0] = x;
pts[1] = y;
inverse.mapPoints(pts);
if(pts[1] >= top && pts[1] <= bottom && pts[0] >= left && pts[0] <= right)
{
return i;
}
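Put together, a hit-test helper along these lines should work. This is only a minimal sketch, assuming m exposes getX/getY/getRotation/getSize as in the question and i indexes into mBitmaps:
private boolean hitTest(float x, float y, int i)
{
    float left = m.getX();
    float top = m.getY();
    float right = left + mBitmaps.get(i).getWidth();
    float bottom = top + mBitmaps.get(i).getHeight();
    float midx = (left + right) / 2;
    float midy = (top + bottom) / 2;
    // build the same transform used for drawing
    Matrix matrix = new Matrix();
    matrix.setRotate(m.getRotation(), midx, midy);
    matrix.postScale(m.getSize(), m.getSize(), midx, midy);
    // map the touch point back into the unrotated, unscaled rectangle's space
    Matrix inverse = new Matrix();
    matrix.invert(inverse);
    float[] pts = { x, y };
    inverse.mapPoints(pts);
    return pts[0] >= left && pts[0] <= right && pts[1] >= top && pts[1] <= bottom;
}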
I'm confused about how to draw professional-looking brushes in Android. I draw circles along a path as the user moves their finger on the screen, but when the finger moves slowly the number of circles increases, and when it moves fast there are far fewer circles. For example, a fast swipe may leave only 6 or 7 circles along the path, while a slow one leaves 30-40 or more, which looks very buggy. Is it the case that moving the finger fast stores fewer points? A plain line, by contrast, draws perfectly whether the finger moves fast or slow. I'm sharing my code below.
private void DrawCircleBrush(List<PointF> points) {
PointF p1 = points.get(0);
PointF p2 = points.get(1);
Path path = new Path();
path.moveTo(p1.x, p1.y);
for (int i = 1; i < points.size(); i++) {
int rc = (int) (20 +(this.paintStrokeWidth/5));
path.addCircle(p1.x, p1.y, (float) rc, Path.Direction.CCW);
}
this.invalidate();
}
I call the DrawCircleBrush function on ACTION_MOVE like this:
path.reset();
points.add(new PointF(x, y));
DrawCircleBrush(points);
You can see the difference between a fast-moving and a slow-moving finger in the attached picture.
What I want to achieve you can see in this photo: the brush in that app draws the same whether I move my finger fast or slow.
OK, at last I found a solution.
This is how I'm getting all the points. Note that this is Bresenham's line algorithm and it only works with integers.
With this, whether the finger moves fast or slow, the points will always be the same :D
//x0,y0 is the starting point and x1,y1 is the current point
public List<PointF> findLine( int x0, int y0, int x1, int y1)
{
List<PointF> line = new ArrayList<PointF>();
int dx = Math.abs(x1 - x0);
int dy = Math.abs(y1 - y0);
int sx = x0 < x1 ? 1 : -1;
int sy = y0 < y1 ? 1 : -1;
int err = dx-dy;
int e2;
while (true)
{
line.add(new PointF(x0,y0));
if (x0 == x1 && y0 == y1)
break;
e2 = 2 * err;
if (e2 > -dy)
{
err = err - dy;
x0 = x0 + sx;
}
if (e2 < dx)
{
err = err + dx;
y0 = y0 + sy;
}
}
return line;
}
This is how I'm using this function for my brush:
//radius of circle
int rc = (int) (20 +(this.paintStrokeWidth/5));
//getting the points of line
List<PointF> pointFC =findLine((int)this.startX,(int) this.startY,(int) x,
(int) y);
//setting the index of first point
int p1 = 0;
//will check if a change occurred
boolean change = false;
for(int l=1; l<pointFC.size(); l++){
//getting the distance between two points
float d = distanceBetween(pointFC.get(p1),pointFC.get(l));
if(d>rc){
//we will add this point for drawing
//points is a list of PointF, declared globally
points.add(new PointF(pointFC.get(l).x,pointFC.get(l).y));
//we will change the index of the last point
p1 = l-1;
change = true;
}
}
if(points.size() >0){
path.reset();
DrawCircleBrush(points);
}
if(change){
//we will change the start points; set them as the last drawn points
this.startX = points.get(points.size()-1).x;
this.startY = points.get(points.size()-1).y;
}
//Distance between two points
private float distanceBetween(PointF point1,PointF point2) {
return (float) Math.sqrt(Math.pow(point2.x - point1.x, 2) +
Math.pow(point2.y - point1.y, 2));
}
//this is how I'm drawing my circle brush
private void DrawCircleBrush(List<PointF> points) {
Path path = this.getCurrentPath();
path.moveTo(points.get(0).x, points.get(0).y);
for (int i = 1; i < points.size(); i++) {
PointF pf = points.get(i);
int rc = (int) (20 +(this.paintStrokeWidth/5));
path.addCircle(pf.x, pf.y, (float) rc, Path.Direction.CCW);
}
}
Result: the brush is the same whether the finger moves fast or slow.
Check the "colored_pixels" from here
I have a feeling this question has been solved many times already, but I cannot figure it out. I was basically following this little tutorial about Mobile Vision and completed it. After that I tried to detect objects myself, starting with a color blob and drawing its borders.
The idea is to start in the middle of the frame (holding the object in the middle of the camera on purpose) and detect the edges of that object by its color. It works as long as I hold the phone in landscape mode (Frame.ROTATION_0). As soon as I'm in portrait mode (Frame.ROTATION_90), the bounding Rect gets drawn rotated, so an object with more height gets drawn with more width, and also a bit off.
The docs say that a detector always delivers coordinates relative to an unrotated, upright frame, so how am I supposed to calculate the bounding rectangle coordinates relative to its rotation?
I don't think it matters much, but here is how I find the color Rect:
public Rect getBounds(Frame frame){
int w = frame.getMetadata().getWidth();
int h = frame.getMetadata().getHeight();
int scale = 50;
int scaleX = w / scale;
int scaleY = h / scale;
int midX = w / 2;
int midY = h / 2;
float ratio = 10.0f;
Rect mBoundary = new Rect();
float[] hsv = new float[3];
Bitmap bmp = frame.getBitmap();
int px = bmp.getPixel(midX, midY);
Color.colorToHSV(px, hsv);
Log.d(TAG, "detect: mid hsv: " + hsv[0] + ", " + hsv[1] + ", " + hsv[2]);
float hue = hsv[0];
float nhue;
int x, y;
for (x = midX + scaleX; x < w; x+=scaleX){
px = bmp.getPixel(x, midY);
Color.colorToHSV(px, hsv);
nhue = hsv[0];
if (nhue <= (hue + ratio) && nhue >= (hue - ratio)){
mBoundary.right = x;
} else {
break;
}
}
for (x = midX - scaleX; x >= 0; x-= scaleX){
px = bmp.getPixel(x, midY);
Color.colorToHSV(px, hsv);
nhue = hsv[0];
if (nhue <= (hue + ratio) && nhue >= (hue - ratio)){
mBoundary.left = x;
} else {
break;
}
}
for (y = midY + scaleY; y < h; y+=scaleY){
px = bmp.getPixel(midX, y);
Color.colorToHSV(px, hsv);
nhue = hsv[0];
if (nhue <= (hue + ratio) && nhue >= (hue - ratio)){
mBoundary.bottom = y;
} else {
break;
}
}
for (y = midY - scaleY; y >= 0; y-=scaleY){
px = bmp.getPixel(midX, y);
Color.colorToHSV(px, hsv);
nhue = hsv[0];
if (nhue <= (hue + ratio) && nhue >= (hue - ratio)){
mBoundary.top = y;
} else {
break;
}
}
return mBoundary;
}
Then I simply draw it on the canvas in the GraphicOverlay.Graphic's draw method. I already use the transformX/Y methods on the Graphic and thought they would also account for the rotation.
I also use the CameraSource and CameraSourcePreview classes provided with the samples.
I am currently working on a 2D Android game.
In this game a view object (Bitmap) moves across the screen on a parabolic path, like in this image. But the path is static; the static path was captured by drawing with a finger on the canvas, the same as a signature drawing.
The code that moves the bitmap along this static path is:
//animation step
private static int iMaxAnimationStep = 900;
private int iCurStep = 0;
private Path ptCurve = new Path(); //curve
private PathMeasure pm; //curve measure
private float fSegmentLen; //curve segment length
//init smooth curve
PointF point = aPoints.get(0);
ptCurve.moveTo(point.x, point.y);
for(int i = 0; i < aPoints.size() - 1; i++){
point = aPoints.get(i);
PointF next = aPoints.get(i+1);
ptCurve.quadTo(point.x, point.y, (next.x + point.x) / 2, (point.y + next.y) / 2);
}
pm = new PathMeasure(ptCurve, false);
fSegmentLen = pm.getLength() / iMaxAnimationStep; //length of one animation step
//animate the Bitmap
Matrix mxTransform = new Matrix();
if (iCurStep <= iMaxAnimationStep)
{
pm.getMatrix(fSegmentLen * iCurStep, mxTransform,
PathMeasure.POSITION_MATRIX_FLAG);
mxTransform.preTranslate(-Bitmap.getWidth(), -Bitmap.getHeight());
canvas.drawBitmap(Bitmap, mxTransform, null);
iCurStep++; //advance to the next step
mPauseViewHandler.post(mPauseViewRunnable);
} else {
iCurStep = 0;
}
But my problem is that I want to move this view object (Bitmap) on a dynamic path (a parabolic curve),
and that dynamic curved path should work on any device.
I have searched a lot but I can't find a solution for how to build a dynamic parabolic path.
If you have any solution, suggestion, idea, or tutorial regarding this, it is much appreciated.
It's simple enough to fill the aPoints array based on your screen size and get a parabolic path from those points. I've removed all your bitmap/animation code; the code below calculates the path and draws it on the screen.
We need a new variable to set how many curves we want on the screen. If you prefer, it's easy to change the math and define the size of each curve instead.
private int numberOfCurves = 5;
With that it's simple to calculate 3 points for each parabola:
public void calculatePoints(){
float w = v.getWidth(); //Screen width
float h = v.getHeight(); //Screen height
float curveSize = w/numberOfCurves; // Curve size
float curveHeight = (h/100) * 20; //peak y is 20% from the top, so the curves span 80% of the screen height
Log.d(TAG,"h:"+h +" - w:" + w);
float lastX = 0; //last used X coordinate
for (int i=0;i<numberOfCurves;i++){ //for each curve we'll need 3 points
float newX = lastX + curveSize;
PointF p = new PointF(lastX, h); //first point is the last point
PointF p1 = new PointF((lastX + newX)/2, curveHeight); //the middle point is halfway between the last and the new point
PointF p2 = new PointF(newX,h); // the new point is last point + the size of our curve
aPoints.add(p); //fill in the array
aPoints.add(p1);
aPoints.add(p2);
lastX = newX; //update last point
}
//log our points
for (PointF p : aPoints){
Log.d(TAG,p.x +"-"+p.y);
}
}
Now that we have a set of points defining each parabola, we need to draw it. Instead of using quadTo, use cubicTo, which takes three points and draws a curve connecting them. Put it in onDraw, and you have your parabolas drawn on the screen.
private Path ptCurve = new Path(); //curve
@Override
public void onDraw(Canvas canvas) {
calculatePoints();
Log.d(TAG,"DRAWING");
PointF point = aPoints.get(0);
ptCurve.moveTo(point.x, point.y);
for(int i = 0; i < aPoints.size() - 1; i+=3){
point = aPoints.get(i);
PointF middle = aPoints.get(i+1);
PointF next = aPoints.get(i+2);
ptCurve.cubicTo(point.x, point.y, middle.x,middle.y, next.x , next.y);
}
canvas.drawPath(ptCurve, paint);
}
So your ptCurve variable is now filled with a parabolic path, with as many curves as you've defined earlier, and it will work on any screen size.
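To actually move the bitmap along this dynamic path, you can feed ptCurve into the same PathMeasure animation loop from the question. A rough sketch (assuming the question's iCurStep/iMaxAnimationStep fields and a bitmap field named mBitmap, which is a guess at the asker's variable name):
pm = new PathMeasure(ptCurve, false); // build the measure after calculatePoints()
fSegmentLen = pm.getLength() / iMaxAnimationStep;
// per frame, advance one step along the dynamic path
Matrix mxTransform = new Matrix();
if (iCurStep <= iMaxAnimationStep)
{
    pm.getMatrix(fSegmentLen * iCurStep, mxTransform, PathMeasure.POSITION_MATRIX_FLAG);
    // center the bitmap on the current path position
    mxTransform.preTranslate(-mBitmap.getWidth() / 2f, -mBitmap.getHeight() / 2f);
    canvas.drawBitmap(mBitmap, mxTransform, null);
    iCurStep++;
    invalidate(); // schedule the next frame
}
else
{
    iCurStep = 0;
}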
I am working on a drawing app that allows users to import an image and draw on it. Images bigger than the drawing area are scaled down to fit the maximum screen width or screen height.
The imported image is placed at the center of the drawingView using canvas.drawBitmap(bitmap, x_adjustment, y_adjustment, paintScreen);
This way there can be blank space to the left/right or top/bottom of the imported image.
The adjustments x_adjustment and y_adjustment are counted from (0,0).
Coding:
onDraw
@Override
protected void onDraw(Canvas canvas)
{
canvas.drawBitmap(bitmap, x_adjustment, y_adjustment, paintScreen);
for (Integer key : pathMap.keySet())
canvas.drawPath(pathMap.get(key), paintLine); // draw line
}
touchStarted:
private void touchStarted(float x, float y, int lineID)
{
Path path; // used to store the path for the given touch id
Point point; // used to store the last point in path
path = new Path(); // create a new Path
pathMap.put(lineID, path); // add the Path to Map
point = new Point();
previousPointMap.put(lineID, point);
path.moveTo(x, y);
point.x = (int) x;
point.y = (int) y;
}
touchMoved:
// called when the user drags along the screen
private void touchMoved(MotionEvent event)
{
// for each of the pointers in the given MotionEvent
for (int i = 0; i < event.getPointerCount(); i++)
{
// get the pointer ID and pointer index
int pointerID = event.getPointerId(i);
int pointerIndex = event.findPointerIndex(pointerID);
if (pathMap.containsKey(pointerID))
{
// get the new coordinates for the pointer
float newX = event.getX(pointerIndex);
float newY = event.getY(pointerIndex);
// get the Path and previous Point associated with this pointer
Path path = pathMap.get(pointerID);
Point point = previousPointMap.get(pointerID);
float deltaX = Math.abs(newX - point.x);
float deltaY = Math.abs(newY - point.y);
if (deltaX >= TOUCH_TOLERANCE || deltaY >= TOUCH_TOLERANCE)
{
path.quadTo(point.x, point.y, ((newX + point.x)/2),((newY + point.y)/2));
// store the new coordinates
point.x = (int) newX ;
point.y = (int) newY ;
}
}
}
}
touchEnded:
private void touchEnded(int lineID)
{
Path path = pathMap.get(lineID);
bitmapCanvas.drawPath(path, paintLine);
path.reset();
}
Question:
Since the imported image is placed at the center rather than at (0,0), every line drawn shows correctly while the finger is on the screen, but as soon as the user lifts the finger (touch ended) the finalized line is shifted by x_adjustment and y_adjustment.
E.g. if the scaled image width < screen width, there is blank space on the left and right; while drawing, the line shows correctly, yet when the finger is lifted the line immediately shifts to the right by x_adjustment.
I know the adjustments cause the error, and that the path's coordinates need to be saved with an x,y shift, but I don't know how to modify the code; I have tried adding the adjustment to the paths but it still fails. Could anybody kindly give me some guidance? Many thanks!
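One way this kind of offset is usually handled (just a sketch, not tested against the code above) is to keep building the path in view coordinates for the live preview, but shift it by the adjustments before committing it to the bitmap's own canvas, because bitmapCanvas works in bitmap coordinates while onDraw re-applies the offset when it draws the bitmap:
private void touchEnded(int lineID)
{
    Path path = pathMap.get(lineID);
    // the path was built in view coordinates; undo the centering offset
    // so it lands in the right place on the bitmap itself
    path.offset(-x_adjustment, -y_adjustment);
    bitmapCanvas.drawPath(path, paintLine);
    path.reset();
}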
I'm using the code below to draw a line on a bitmap canvas while the finger moves. I've posted partial code here and it is working fine.
As shown in the image below, the black-and-white bitmap is erased on touch drag. I made the canvas transparent so the parent layout's background (a color image) becomes visible.
I want to know how much area has been erased (like 50% or 60% of the bitmap). Is there any way to find that?
//Erasing paint
mDrawPaint = new Paint();
mDrawPaint.setAntiAlias(true);
mDrawPaint.setDither(true);
mDrawPaint.setStyle(Paint.Style.STROKE);
mDrawPaint.setStrokeJoin(Paint.Join.ROUND);
mDrawPaint.setStrokeCap(Paint.Cap.ROUND);
mDrawPaint.setStrokeWidth(50);
mDrawPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR));
BlurMaskFilter mBlur = new BlurMaskFilter(10, BlurMaskFilter.Blur.NORMAL);
mDrawPaint.setMaskFilter(mBlur);
private void doDraw(Canvas c) {
c.drawBitmap(mBitmap, 0, 0,null );
}
private float mX, mY;
private static final float TOUCH_TOLERANCE = 1;
void touch_start(float x, float y) {
mPath.reset();
mPath.moveTo(x, y);
mX = x;
mY = y;
}
void touch_move(float x, float y) {
float dx = Math.abs(x - mX);
float dy = Math.abs(y - mY);
if (dx >= TOUCH_TOLERANCE || dy >= TOUCH_TOLERANCE) {
mPath.quadTo(mX, mY, (x + mX)/2, (y + mY)/2);
mX = x;
mY = y;
}
canvas.drawPath(mPath, mDrawPaint ); //Erasing Black and white image
}
void touch_up() {
mPath.lineTo(mX, mY);
// commit the path to our offscreen
mCanvas.drawPath(mPath, mDrawPaint);
// kill this so we don't double draw
mPath.reset();
}
Try using the Monte Carlo method to estimate the percentage of transparent area. I think it is the fastest and easiest way to do this. Take about 50 random pixels (depending on the accuracy you need) from your transparency mask and check their color. Then calculate ans = transparentPixelCount / testPixelCount.
It is very hard to calculate the area of the user's drawing from path coordinates, and it's quite slow to iterate over all pixels. So, IMHO, Monte Carlo is your choice.
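A minimal sketch of that idea (bm is assumed to be the offscreen bitmap being erased; the sample count is up to you):
static float estimateTransparent(Bitmap bm, int samples) {
    Random rnd = new Random();
    int transparent = 0;
    for (int i = 0; i < samples; i++) {
        int x = rnd.nextInt(bm.getWidth());
        int y = rnd.nextInt(bm.getHeight());
        // alpha == 0 means the CLEAR eraser has hit this pixel
        if (Color.alpha(bm.getPixel(x, y)) == 0) {
            transparent++;
        }
    }
    return (float) transparent / samples;
}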
To get an exact (and slow) answer, you need to inspect every pixel, count how many are transparent, and divide by the total number of pixels. If your requirements allow some estimation, it is probably best to sample the image.
You could downsize the image and run the above procedure on the smaller image. That has the disadvantage that the scaling operation might go through all the pixels, making it slow. I would recommend grid sampling instead; it is similar to downsizing, but skips over pixels. Basically, we evenly space sample points on a grid over the image, then count how many sample points are transparent. The estimated transparent percentage is the number of transparent samples divided by the total number of samples. You can get reasonable accuracy (usually within 5%) with a small number of samples, say 100. Here is a function that implements this method -- bm is the Bitmap and scale is the number of samples per axis, so setting scale = 10 gives 100 total samples (a 10x10 sampling grid over the image).
static public float percentTransparent(Bitmap bm, int scale) {
final int width = bm.getWidth();
final int height = bm.getHeight();
// size of sample rectangles
final int xStep = width/scale;
final int yStep = height/scale;
// center of the first rectangle
final int xInit = xStep/2;
final int yInit = yStep/2;
// center of the last rectangle
final int xEnd = width - xStep/2;
final int yEnd = height - yStep/2;
int totalTransparent = 0;
for(int x = xInit; x <= xEnd; x += xStep) {
for(int y = yInit; y <= yEnd; y += yStep) {
if (bm.getPixel(x, y) == Color.TRANSPARENT) {
totalTransparent++;
}
}
}
return ((float)totalTransparent)/(scale * scale);
}
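For instance, with the 10x10 grid described above, a call might look like this (mBitmap being the erased offscreen bitmap from the question):
float erased = percentTransparent(mBitmap, 10); // ~100 samples
Log.d(TAG, "erased fraction: " + erased);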
For reference, the slow method that computes the exact result by counting every pixel is below. It can be used to test the estimator above.
static public float percentTransparent(Bitmap bm) {
final int width = bm.getWidth();
final int height = bm.getHeight();
int totalTransparent = 0;
for(int x = 0; x < width; x++) {
for(int y = 0; y < height; y++) {
if (bm.getPixel(x, y) == Color.TRANSPARENT) {
totalTransparent++;
}
}
}
return ((float)totalTransparent)/(width * height);
}
A different approach: you can calculate the size of each path using computeBounds. Then it should be simple to compare this with the size of your view and decide the percentage of the drawing.
Just keep in mind that the path can be drawn over itself, so you need to be careful and handle that in the calculation.
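A rough sketch of that idea (it assumes a single mPath and uses the view's own dimensions; as noted, overlapping strokes make it overestimate):
RectF bounds = new RectF();
mPath.computeBounds(bounds, true); // true = compute exact bounds
float pathArea = bounds.width() * bounds.height();
float viewArea = getWidth() * getHeight();
float fraction = viewArea > 0 ? pathArea / viewArea : 0f;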
Store all the points' x and y values in two different sorted sets, one for the x values and one for the y values.
The corners of your bound will then be point(min_x, min_y) and point(max_x, max_y).
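A small sketch of that, assuming the stroke is available as a List&lt;PointF&gt; named points:
TreeSet<Float> xs = new TreeSet<>();
TreeSet<Float> ys = new TreeSet<>();
for (PointF p : points) {
    xs.add(p.x);
    ys.add(p.y);
}
PointF topLeft = new PointF(xs.first(), ys.first());   // (min_x, min_y)
PointF bottomRight = new PointF(xs.last(), ys.last()); // (max_x, max_y)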
You need to detect whether the points lie inside the drawn polygon.
Here are functions that take, as the first parameter, a table containing all the drawn points and, as the second parameter, the point itself, i.e. its x and y.
-- Return true if the dot { x,y } is within any of the polygons in the list
function pointInPolygons( polygons, dot )
for i = 1, #polygons do
if pointInPolygon( polygons[i], dot ) then
return true
end
end
return false
end
-- Returns true if the dot { x,y } is within the polygon
-- defined by the points table { {x,y}, {x,y}, {x,y}, ... }
function pointInPolygon( points, dot )
local i, j = #points, #points
local oddNodes = false
for i=1, #points do
if ((points[i].y < dot.y and points[j].y>=dot.y
or points[j].y< dot.y and points[i].y>=dot.y) and (points[i].x<=dot.x
or points[j].x<=dot.x)) then
if (points[i].x+(dot.y-points[i].y)/(points[j].y-points[i].y)*(points[j].x-points[i].x)<dot.x) then
oddNodes = not oddNodes
end
end
j = i
end
return oddNodes
end
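Since the rest of this thread is Android Java, here is a rough Java port of the same even-odd test (a sketch only, assuming the polygon is a List&lt;PointF&gt;):
static boolean pointInPolygon(List<PointF> points, float x, float y) {
    boolean odd = false;
    int j = points.size() - 1;
    for (int i = 0; i < points.size(); i++) {
        PointF pi = points.get(i);
        PointF pj = points.get(j);
        // does the edge (pi, pj) straddle the horizontal line through y?
        if ((pi.y < y && pj.y >= y) || (pj.y < y && pi.y >= y)) {
            // x coordinate where the edge crosses that horizontal line
            if (pi.x + (y - pi.y) / (pj.y - pi.y) * (pj.x - pi.x) < x) {
                odd = !odd;
            }
        }
        j = i;
    }
    return odd;
}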