I have a simple drawing app where I have a picture of a letter in the back filling an ImageView set to 45% of the screen's height. I have a JSON file that stores points along the letter. I'm trying to display those points on top of the picture of the letter.
Those points' coordinates range between y = -440 and y = 200. To display the points properly, I need the top and the height of the ImageView containing the letter so I can map the points onto the screen. I have to map the points at runtime because different screen sizes need a different scale to display the points correctly.
This is what it should look like (this is with a phone specific correction factor):
This is what it actually looks like:
I render the drawing via a Canvas that I paint the points onto. I'm pretty sure the problem lies in how I'm getting the top of the ImageView.
Here's what I've tried:
// Attempt 1: getTop() is relative to the parent, not the screen
float y = iv.getTop();
// Attempt 2: absolute position on the screen
final int[] screenPos = new int[2];
iv.getLocationOnScreen(screenPos);
float y = screenPos[1];
// Attempt 3: combining the two values
float offset = iv.getTop() - screenPos[1];
float y = iv.getTop() + offset;
Is there something I'm supposed to be doing that I'm not? Is there a better way to handle the parent-relative values returned by getTop()? Help.
getGlobalVisibleRect() will give you the position relative to the screen:
Rect rect = new Rect();
view.getGlobalVisibleRect(rect);
float top = rect.top;
// there are also rect.bottom, rect.width(), rect.height(), etc.
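For the letter-mapping problem above, a minimal sketch of how that rect could drive the point mapping (assuming the y range of -440 to 200 from the question; letterView and pointY are illustrative names):
Rect rect = new Rect();
letterView.getGlobalVisibleRect(rect); // on-screen bounds of the ImageView
// Source range from the question: y runs from -440 to 200.
final float SRC_MIN_Y = -440f;
final float SRC_MAX_Y = 200f;
// Linear mapping of a source y onto the view's on-screen span.
// Invert the fraction if the source y axis points the other way.
float scale = rect.height() / (SRC_MAX_Y - SRC_MIN_Y);
float screenY = rect.top + (pointY - SRC_MIN_Y) * scale;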
Related
I have tried various methods, but I want to get the bitmap's coordinates when the activity loads so that I can use them to set up the polygon view.
I have tried using the ImageView's width and height, but the polygon views occupy the whole screen. I want the polygon view to be restricted to the bitmap, and for that I need the bitmap's coordinates.
I want the (x, y) coordinates which are written in blue as depicted below. Any help would be appreciated.
If I understood your question correctly, you need a mathematical solution here to get the x, y positions. First, set the bitmap on the ImageView with ScaleType = CENTER_INSIDE.
Your image's position is then fixed and will touch either the X or the Y axis.
Calculate the ratios:
Br (bitmap ratio) = Bw (bitmap width) / Bh (bitmap height)
Ir (ImageView ratio) = Iw / Ih
Now, use the formula below:
if (Ir > Br) {
    // The image is limited by the view's height: it touches top and bottom.
    y = 0;
    x = (Iw - Br * Ih) / 2; // horizontal offset of the bitmap's left edge
} else {
    // The image is limited by the view's width: it touches left and right.
    x = 0;
    y = (Ih - Iw / Br) / 2; // vertical offset of the bitmap's top edge
}
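Alternatively, a sketch not taken from the answer above: the ImageView's image matrix already encodes the scale and translation applied to the drawable (for matrix-based scale types such as CENTER_INSIDE or FIT_CENTER), so the displayed bounds can be read directly:
// Map the drawable's intrinsic bounds through the ImageView's draw matrix
// to get the rectangle the bitmap actually occupies inside the view.
RectF bounds = new RectF(0, 0,
        imageView.getDrawable().getIntrinsicWidth(),
        imageView.getDrawable().getIntrinsicHeight());
imageView.getImageMatrix().mapRect(bounds);
float x = bounds.left; // blue x from the question
float y = bounds.top;  // blue y from the question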
In my Android app, I am capturing a screenshot programmatically from a background service. I obtain it as a Bitmap.
Next, I obtain the coordinates of a region of interest (ROI) with the following Android framework API:
Rect ROI = new Rect();
viewNode.getBoundsInScreen(ROI);
Here, getBoundsInScreen() is the Android equivalent of the JavaScript function getBoundingClientRect().
A Rect in Android has the following properties:
rect.top
rect.left
rect.right
rect.bottom
rect.height()
rect.width()
rect.centerX() /* rounded off to integer */
rect.centerY()
rect.exactCenterX() /* exact value in float */
rect.exactCenterY()
(See: What does top, left, right and bottom mean in an Android Rect object?)
Whereas a Rect in OpenCV has the following properties:
rect.width
rect.height
rect.x /* x coordinate of the top-left corner */
rect.y /* y coordinate of the top-left corner */
Now before we can perform any OpenCV-related operations, we need to transform the Android Rect to an OpenCV Rect.
(See: Understanding how drawRect and drawing coordinates work in Android.)
There are two ways to convert an Android Rect to an OpenCV Rect (as suggested by Karl Phillip in his answer). Both generate the same values and both produce the same result:
/* Compute the top-left corner using the center point of the rectangle. */
int x = androidRect.centerX() - (androidRect.width() / 2);
int y = androidRect.centerY() - (androidRect.height() / 2);
// OR simply use the already available member variables:
x = androidRect.left;
y = androidRect.top;
int w = androidRect.width();
int h = androidRect.height();
org.opencv.core.Rect roi = new org.opencv.core.Rect(x, y, w, h);
Now one of the OpenCV operations I am performing is blurring the ROI within the screenshot:
Mat originalMat = new Mat();
// Convert the screenshot Bitmap to a 4-channel OpenCV Mat.
Bitmap configuredBitmap32 = originalBitmap.copy(Bitmap.Config.ARGB_8888, true);
Utils.bitmapToMat(configuredBitmap32, originalMat);
// Blur only the ROI submatrix, then copy it back into place.
Mat ROIMat = originalMat.submat(roi).clone();
Imgproc.GaussianBlur(ROIMat, ROIMat, new org.opencv.core.Size(0, 0), 5, 5);
ROIMat.copyTo(originalMat.submat(roi));
// Convert the modified Mat back to a Bitmap.
Bitmap blurredBitmap = Bitmap.createBitmap(originalMat.cols(), originalMat.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(originalMat, blurredBitmap);
This brings us very close to the desired result. Almost there, but not quite: the area just BENEATH the targeted region is blurred.
For example, if the targeted region of interest is a password field, the above code produces the following results:
On the left, Microsoft Live ROI, and on the right, Pinterest ROI:
As can be seen, the area just below the ROI gets blurred.
So my question is, finally: why isn't the exact region of interest blurred?
The coordinates obtained through the Android API getBoundsInScreen() appear to be correct.
Converting an Android Rect to an OpenCV Rect also appears to be correct. Or is it?
The code for blurring a region of interest also appears to be correct. Is there another way to do the same thing?
N.B: I've provided the actual, full-size screenshots as I am getting them. They have been scaled down by 50% to fit in this post, but other than that they are exactly as I am getting them on the Android device.
If I'm not mistaken, OpenCV's Rect assumes that x and y specify the top left corner of the rectangle:
/* Compute the top-left corner using the center point of the rectangle
* TODO: take care of float to int conversion
*/
int x = androidRect.centerX() - (androidRect.width() / 2);
int y = androidRect.centerY() - (androidRect.height() / 2);
// OR simply use the already available member variables:
x = androidRect.left;
y = androidRect.top;
int w = androidRect.width();
int h = androidRect.height();
org.opencv.core.Rect roi = new org.opencv.core.Rect(x, y, w, h);
As per the screenshots, the value you get for rect.x is not the same as what the OpenCV rect needs. The Android Rect's values come from the screen's pixel grid (and so depend on the screen's pixel density), while the OpenCV rect addresses the image's own pixel rows and columns.
Check the height of the image against the total rows of the original Mat: if they differ, the rect will not land in the right place. They need to match, so you have to multiply the rect's distances by a constant scaling factor to get its accurate position.
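A minimal sketch of that scaling step, assuming the screenshot Mat and the screen can differ in size (screenWidthPx and screenHeightPx are illustrative names, not from the answer):
// Scale factors between the Mat's pixel grid and the screen's.
float scaleX = (float) originalMat.cols() / screenWidthPx;
float scaleY = (float) originalMat.rows() / screenHeightPx;
org.opencv.core.Rect scaledRoi = new org.opencv.core.Rect(
        Math.round(androidRect.left * scaleX),
        Math.round(androidRect.top * scaleY),
        Math.round(androidRect.width() * scaleX),
        Math.round(androidRect.height() * scaleY));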
I am currently working with onDraw(); and custom shapes.
What I am trying to do here is to draw 3 lines on the blue rectangle below:
The lines will display a person's speed range settings and their current speed. I plan to render the speed range by getting the location, the width and the height of the rectangle and then dividing this up by the range set by the user.
However, I cannot find a resource that allows me to get the location, width and height of the blue rectangle.
Is there any way to achieve this, or do I simply have to get it from the source XML?
To get the location of a view relative to its parent view:
float x = view.getX();
float y = view.getY();
To get the location of a view relative to the screen:
int[] location = new int[2];
view.getLocationOnScreen(location);
int x = location[0];
int y = location[1];
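For the speed-range drawing above, a sketch of how those values could drive a line inside onDraw() (minSpeed, maxSpeed, currentSpeed and rectView are hypothetical names, not from the answer; canvas is the onDraw() canvas and paint a configured Paint):
// Map a speed value onto the rectangle's horizontal extent.
float left = rectView.getX();
float top = rectView.getY();
float fraction = (currentSpeed - minSpeed) / (maxSpeed - minSpeed);
float lineX = left + fraction * rectView.getWidth();
canvas.drawLine(lineX, top, lineX, top + rectView.getHeight(), paint);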
I'm trying to make a custom view with clickable areas for my app. Those areas are relative to the pixel coordinates of the image that will fill the view. I've placed those images in drawable-nodpi to avoid system scaling.
My custom view takes one of those images, resizes it keeping the aspect ratio to fit its parent, and then resizes the view to the size of the resulting image. So at this point I have a view that maintains the ratio of the source, and the view's click coordinates (onTouch event.getX and event.getY) are relative to the original image's pixel coordinates.
On the other hand, I have all the coordinates of the shapes that define the clickable areas in an XML file which I load when my activity starts. Those areas are defined by a type: circle or rect.
circle: center x-y and radius in px according to the original image
rect: center x-y, width and height in px according to the original image
Now I need to detect whether my touch x-y falls inside any of those areas, keeping in mind the scaling my original image went through.
How can I detect the "collisions" between my touch coordinates and the clickable areas' coordinates? I mean, how do I calculate that without resizing my original image?
I have made a View like this myself.
I added objects containing an image and x/y coordinates.
Now you need to keep a list of those objects, and when you get an onTouchEvent, you iterate over that list and call something like objectHit():
public boolean objectHit(int x, int y) {
    // Euclidean distance between the touch point and the object's position.
    double touchDistance = Math.sqrt(
            (double) (this.getX() - x) * (this.getX() - x)
            + (double) (this.getY() - y) * (this.getY() - y));
    return touchDistance <= this.getTouchableArea();
}
And you implement getTouchableArea() for the object basically the same way:
public double getTouchableArea() {
    // Half the diagonal of the object's bitmap.
    return Math.sqrt(Math.pow(getBitmap().getHeight(), 2) + Math.pow(getBitmap().getWidth(), 2)) / 2;
}
So what this code does is determine whether the touch is within the size of the image representing the object.
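A hedged usage sketch of the iteration described above (the objects list and the DrawableObject class are illustrative names):
@Override
public boolean onTouchEvent(MotionEvent event) {
    for (DrawableObject obj : objects) {
        if (obj.objectHit((int) event.getX(), (int) event.getY())) {
            // react to the hit here
            return true;
        }
    }
    return super.onTouchEvent(event);
}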
This is what I ended up doing:
for (i = 0; i < level.getDiffs(); i++) {
    DifferencesData diff = level.getDifference(i);
    if (!diff.getFinded()) {
        x = diff.getX();
        y = diff.getY();
        if (diff.getType() == 0) {
            // Circle: hit if the touch is within the circle's radius.
            double d = Math.sqrt(Math.pow(x - event.getX(), 2) + Math.pow(y - event.getY(), 2));
            if (d <= diff.getRadius()) {
                hit = true;
                break;
            }
        } else {
            // Rect: hit if the touch is within half the width and half the
            // height of the rectangle's center.
            double dx = Math.abs(x - event.getX());
            double dy = Math.abs(y - event.getY());
            if (dx <= (diff.getWidth() / 2) && dy <= (diff.getHeight() / 2)) {
                hit = true;
                break;
            }
        }
    }
}
First I scaled the original coordinates by the same factor my image was scaled by. Then, inside an OnTouchListener, I calculated the distance from my touch to the radius of the circle, or to the half-width and half-height of my rectangles.
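A minimal sketch of that scaling step, assuming a single scale factor computed when the image is resized (names are illustrative):
// scale = displayedWidth / originalBitmapWidth, computed when the view is sized.
float scaledX = originalX * scale;
float scaledY = originalY * scale;
float scaledRadius = originalRadius * scale;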
Thank you Daniel for your help!
I have an image view which is contained within a relative layout. I am trying to get the screen y coordinates of the top of the image view and the screen y coordinates of the bottom of the image view. I have tried this:
float yMin = (float)levelH.getTop();
float yMax = (float)levelH.getBottom();
yMin seems almost correct. I am translating another image view (IM2) up and down this image view (IM1), and I am trying to set a limit on how far IM2 can translate up and down. My thinking was that by getting the top and bottom y of IM1 I can set those as the max and min.
Anyone know how to do this?
P.S. I'm using the Android accelerometer to move IM2.
getTop() and getBottom() return coordinates within the view's parent. To get its position on the screen you can use getLocationOnScreen().
Use it like this:
int[] coords = {0,0};
view.getLocationOnScreen(coords);
int absoluteTop = coords[1];
int absoluteBottom = coords[1] + view.getHeight();
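From there, clamping IM2 between those bounds is straightforward. A sketch, assuming both views share the same parent so parent-relative coordinates line up (im1, im2 and proposedY are illustrative names; getTop()/getBottom() are used here because setY() is parent-relative):
float yMin = im1.getTop();
float yMax = im1.getBottom() - im2.getHeight();
// Clamp the position proposed by the accelerometer.
float clampedY = Math.max(yMin, Math.min(yMax, proposedY));
im2.setY(clampedY);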
Use View.getLocationOnScreen() and/or getLocationInWindow()