How to calculate the same position on different bitmaps? - android

I have 2 photos: one is 300x300 and the other is 1200x1200.
I drew some text at position A = (50, 40) in the 300x300 image.
How can I calculate the same position A in the 1200x1200 image?
UPDATE 2:
If the dimensions are not round (such as 523 x 412...), x and y will be off after multiplying.

You may go with a relative position calculation as follows.
AAx = (50/300) * 1200;
AAy = (40/300) * 1200;
so your new position will be AA = (200, 160)

The scaling factor for both x and y is 1200/300 = 4.
Then, simply multiply both x and y by 4 (your scaling factor).
int scaleFactor = 1200 / 300;
int newX = oldX * scaleFactor;
int newY = oldY * scaleFactor;
So, given that oldX = 50 and oldY = 40, the expected values for newX and newY are 200 and 160, respectively.
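For the non-round case raised in UPDATE 2 (e.g. a 523 x 412 target), here is a minimal sketch, assuming placeholder names targetWidth/targetHeight and sourceWidth/sourceHeight for your actual dimensions: compute separate float factors for x and y and round only once at the end, instead of using a single integer factor.
float scaleX = (float) targetWidth / sourceWidth;   // e.g. 523f / 300f
float scaleY = (float) targetHeight / sourceHeight; // e.g. 412f / 300f
int newX = Math.round(oldX * scaleX);
int newY = Math.round(oldY * scaleY);
The rounding error is then at most half a pixel, which is the best you can do when mapping into a discrete pixel grid.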

Related

The algorithm rotation of YUV_420_888 format in react-native-camera

Below is the rotation code from react-native-camera, used to support barcode detection with the ZXing library. The specific link is here.
private byte[] rotateImage(byte[] imageData, int width, int height) {
    byte[] rotated = new byte[imageData.length];
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            rotated[x * height + height - y - 1] = imageData[x + y * width];
        }
    }
    return rotated;
}
imageData is in YUV_420_888 format.
I know it rotates a frame, but how does it really work? Does it rotate 90 or 180 degrees? Clockwise or anticlockwise?
I'm struggling to test it with the sample images I put in, so I don't completely understand it.
The code that you have posted rotates a 1 byte per pixel monochrome (grey-scale) image 90 degrees clockwise and returns it in a new byte array. It doesn't process any chroma information.
The YUV_420_888 image format stores an image in YUV format, where Y is the luma (grey-scale component) which is stored first in memory, and U and V are the chroma components which are stored after the luma. To save space, U and V are stored at half the horizontal and vertical resolution of the luma component.
Because the luma component is stored first, if you just ignore the chroma channels that come after it, you can treat it as a monochrome image, which is what the code is doing.
To do the actual rotation, the code is iterating over all the pixels in y and x. For each pixel, it calculates the new pixel location in the rotated image and copies it there.
To picture what's happening: each row of the source becomes a column of the rotated image, with the top source row ending up as the rightmost column.
The YUV_420_888 format stores the pixels one row at a time, top-to-bottom, left-to-right, so the math to compute a pixel location looks like this:
old_pixel_location = (y * width) + x
After the rotation, the old image width becomes the new image height and vice versa. The pixel position in the rotated image has a new_y value equal to the x value, and a new_x value which is y pixels to the left of the right side of the image.
new_width = height
new_height = width
new_x = (new_width - 1) - y
new_y = x
The new pixel position is:
new_pixel_location = (new_y * new_width) + new_x
// substituting gives:
new_x = (height - 1) - y
new_pixel_location = (x * height) + ((height - 1) - y)
// removing brackets and re-ordering:
old_pixel_location = x + y * width
new_pixel_location = x * height + height - y - 1
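To check the mapping concretely, here is a small self-contained example (mine, not from the original answer) that applies the same index math to a tiny 3x2 grey-scale plane and prints the result:
public class RotateCheck {
    static byte[] rotate(byte[] data, int width, int height) {
        byte[] rotated = new byte[data.length];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                rotated[x * height + height - y - 1] = data[x + y * width];
            }
        }
        return rotated;
    }

    public static void main(String[] args) {
        // Source is 3 wide x 2 tall, stored row by row:
        // 1 2 3
        // 4 5 6
        byte[] src = {1, 2, 3, 4, 5, 6};
        byte[] dst = rotate(src, 3, 2);
        // Rotated, the image is 2 wide x 3 tall:
        // 4 1
        // 5 2
        // 6 3
        System.out.println(java.util.Arrays.toString(dst)); // prints [4, 1, 5, 2, 6, 3]
    }
}
The top row of the source ends up as the rightmost column, which is exactly a 90-degree clockwise rotation.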

How to zoom at particular XY coordinate(points) in ImageView

I have used the PhotoView library for a custom ImageView. I want to scale the image at a particular point. The method I found is setScale(float scale, float focalX, float focalY, boolean animate).
I am wondering what values I can pass for focalX and focalY. I have X and Y coordinates which I am passing currently, but it scales to a very different position.
Here is a snippet,
intResultX = intTotalX / intArraySize;
intResultY = intTotalY / intArraySize;
mMap.setScale(5, intResultX, intResultY, true);
To zoom at a particular XY coordinate in an ImageView, you can pass values for focalX and focalY along with the scale (which must be between the min and max scale of the PhotoView) and a boolean to animate the change.
Code to get max-min scales:
mPhotoView.getMinimumScale();
mPhotoView.getMaximumScale();
focalX and focalY can be any point on the screen; here I have taken two examples, one at the center of the screen and the other at the top-left corner. The following is the code for both cases.
Code:
Random r = new Random();
float minScale = mPhotoView.getMinimumScale();
float maxScale = mPhotoView.getMaximumScale();
float randomScale = minScale + (r.nextFloat() * (maxScale - minScale));
DisplayMetrics displayMetrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(displayMetrics);
int height = displayMetrics.heightPixels;
int width = displayMetrics.widthPixels;
int centerX = width / 2;
int centerY = height / 2;
/*pass a value of focalX and focalY to scale image to center*/
//mPhotoView.setScale(randomScale, centerX, centerY, true);
/*pass a value of focalX and focalY to scale image to top left corner*/
mPhotoView.setScale(randomScale, 0, 0, true);
Set zoom to the specified scale. Image will be centered around the point
(focusX, focusY). These floats range from 0 to 1 and denote the focus point
as a fraction from the left and top of the view. For example, the top left
corner of the image would be (0, 0). And the bottom right corner would be (1, 1).
public void setZoom(float scale, float focusX, float focusY, ScaleType scaleType) {
    /* setZoom can be called before the image is on the screen, but at this point,
       image and view sizes have not yet been calculated in onMeasure. Thus, we should
       delay calling setZoom until the view has been measured. */
    if (!onDrawReady) {
        delayedZoomVariables = new ZoomVariables(scale, focusX, focusY, scaleType);
        return;
    }
    if (scaleType != mScaleType) {
        setScaleType(scaleType);
    }
    resetZoom();
    scaleImage(scale, viewWidth / 2, viewHeight / 2, true);
    matrix.getValues(m);
    m[Matrix.MTRANS_X] = -((focusX * getImageWidth()) - (viewWidth * 0.5f));
    m[Matrix.MTRANS_Y] = -((focusY * getImageHeight()) - (viewHeight * 0.5f));
    matrix.setValues(m);
    fixTrans();
    setImageMatrix(matrix);
}
Hope this helps. Happy coding.
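As a hedged usage note (my example, not from either answer): for the setZoom() variant quoted above, the focus values are fractions of the image rather than pixel coordinates. So to zoom to 2.5x centred on image pixel (400, 300) of a 1200x900 bitmap, you would pass the pixel position divided by the bitmap size (touchImageView is a placeholder name):
touchImageView.setZoom(2.5f, 400f / 1200f, 300f / 900f, ImageView.ScaleType.FIT_CENTER);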

Drawing stripes in a flag math formula

I have a rectangle with known size and position. (flag)
I have to fill this rectangle with 4 other rectangles. (stripes)
Each stripe must be 1/4 of the total width of the flag, and its position is next to the previous one.
I have to draw this stripes with a random angle that goes from 0° to 90°.
0° = Vertical stripes (stripe width = flag width / 4)
90° = Horizontal stripes (stripe width = flag height / 4)
How can I calculate the width of each stripe for other angles?
int stripes = 4;
RectF rect = new RectF(0, 0, 100f, 75f);
float angle = new Random().nextInt(91); // 0..90 inclusive
float stripeSize;
if (angle == 0) {
    stripeSize = rect.width() / stripes;
} else if (angle == 90) {
    stripeSize = rect.height() / stripes;
} else {
    stripeSize = ?
}
canvas.save();
canvas.rotate(angle, rect.centerX(), rect.centerY());
float offset = 0;
for (int i = 0; i < stripes; i++) {
    if (angle == 0) {
        reusableRect.set(offset, rect.top, offset + stripeSize, rect.bottom);
    } else if (angle == 90) {
        reusableRect.set(rect.left, offset, rect.right, offset + stripeSize);
    } else {
        reusableRect.set(?, ?, ?, ?);
    }
    canvas.drawRect(reusableRect, paint);
    offset += stripeSize;
}
canvas.restore();
Let's pretend you have one stripe. Depending on the angle, the stripe width is going to be a value between the shorter dimension (the height in your case) and the longer dimension (the width in your case). The formula for the stripe width calculation should look something like this:
width + ((height - width) * ?)
where ? varies between 0 and 1 based on the angle of rotation. To me that sounds like the sine function might be a good candidate: sine(0) = 0 and sine(90) = 1. You can use Math.sin(), but be aware that the argument it takes is in radians, not degrees, so you need to use Math.toRadians() on your angle first. Then just divide by the number of stripes:
double radians = Math.toRadians(angle);
float stripeTotal = width + ((height - width) * (float) Math.sin(radians));
float stripeWidth = stripeTotal / 4; // or however many stripes you have
If it's not perfect, you can adjust the formula. One last point: since these values only need to be calculated once, I would do that separately whenever the angle changes (if it ever changes), not inside onDraw().
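As a sketch of how that plugs into the question's loop (my own wiring, not part of the answer; the names mirror the question), one option is to compute the interpolated span once, clip to the flag, and draw the rotated stripes long enough to cover it:
void drawStripes(Canvas canvas, RectF flag, float angle, int stripes, Paint paint) {
    double radians = Math.toRadians(angle);
    // Interpolate the covered span between the flag width (0 degrees) and height (90 degrees).
    float span = flag.width() + (flag.height() - flag.width()) * (float) Math.sin(radians);
    float stripeSize = span / stripes;
    // Make the stripes long enough to cover the flag at any rotation.
    float halfLength = (float) Math.hypot(flag.width(), flag.height()) / 2f;
    canvas.save();
    canvas.clipRect(flag); // keep the rotated stripes inside the flag
    canvas.rotate(angle, flag.centerX(), flag.centerY());
    float offset = flag.centerX() - span / 2f;
    RectF stripe = new RectF();
    for (int i = 0; i < stripes; i++) {
        stripe.set(offset, flag.centerY() - halfLength, offset + stripeSize, flag.centerY() + halfLength);
        canvas.drawRect(stripe, paint);
        offset += stripeSize;
    }
    canvas.restore();
}
If the flag's corners are not fully covered at intermediate angles, one adjustment worth trying is a span of width * cos(angle) + height * sin(angle), the horizontal extent of the rotated flag.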

How to convert coordinates of the image view to the coordinates of the bitmap?

In my app I need to let users check the eyes in a photo.
In OnTouchListener.onTouch(...) I get the coordinates of the ImageView.
How can I convert these coordinates to the point on the bitmap that was touched?
This works for me, at least with API 10+:
final float[] getPointerCoords(ImageView view, MotionEvent e)
{
    final int index = e.getActionIndex();
    final float[] coords = new float[] { e.getX(index), e.getY(index) };
    Matrix matrix = new Matrix();
    view.getImageMatrix().invert(matrix);
    matrix.postTranslate(view.getScrollX(), view.getScrollY());
    matrix.mapPoints(coords);
    return coords;
}
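A hedged usage sketch (my wiring, not part of the answer; it assumes the ImageView shows a BitmapDrawable): map the touch to bitmap coordinates and clamp them before indexing into the bitmap.
@Override
public boolean onTouch(View v, MotionEvent event) {
    ImageView imageView = (ImageView) v;
    float[] p = getPointerCoords(imageView, event);
    Bitmap bmp = ((BitmapDrawable) imageView.getDrawable()).getBitmap();
    int bx = Math.max(0, Math.min(bmp.getWidth() - 1, (int) p[0]));
    int by = Math.max(0, Math.min(bmp.getHeight() - 1, (int) p[1]));
    int pixel = bmp.getPixel(bx, by); // colour of the touched bitmap pixel
    return true;
}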
Okay, so I've not tried this, but giving it a bit of thought, here's what I've got as a suggestion:
ImageView imageView = (ImageView) findViewById(R.id.imageview);
Drawable drawable = imageView.getDrawable();
Rect imageBounds = drawable.getBounds();
//original height and width of the bitmap
int intrinsicHeight = drawable.getIntrinsicHeight();
int intrinsicWidth = drawable.getIntrinsicWidth();
//height and width of the visible (scaled) image
int scaledHeight = imageBounds.height();
int scaledWidth = imageBounds.width();
//Find the ratio of the original image to the scaled image
//Should normally be equal unless a disproportionate scaling
//(e.g. fitXY) is used. Cast to float to avoid integer division.
float heightRatio = (float) intrinsicHeight / scaledHeight;
float widthRatio = (float) intrinsicWidth / scaledWidth;
//do whatever magic to get your touch point
//MotionEvent event;
//get the distance from the left and top of the image bounds
float scaledImageOffsetX = event.getX() - imageBounds.left;
float scaledImageOffsetY = event.getY() - imageBounds.top;
//scale these distances according to the ratio of your scaling
//For example, if the original image is 1.5x the size of the scaled
//image, and your offset is (10, 20), your original image offset
//values should be (15, 30).
int originalImageOffsetX = (int) (scaledImageOffsetX * widthRatio);
int originalImageOffsetY = (int) (scaledImageOffsetY * heightRatio);
Give this idea a try and see if it works for you.
Besides considering the offset due to padding (margin is part of the layout; it's space outside the view and doesn't have to be considered), if the image is scaled you can use the image matrix (ImageView.getImageMatrix()) to scale the coordinates.
EDIT:
You can get the x/y scaling factors and the translation amounts by retrieving the values array and using the respective index constants:
float[] values = new float[9];
matrix.getValues(values);
float xScale = values[Matrix.MSCALE_X];
float xTrans = values[Matrix.MTRANS_X];
Note that the translation doesn't include padding; you would still have to consider that separately. The translation is used, for instance, with FIT_CENTER scaling when there's some "blank" space.
I'd say you probably need to offset the coordinates from the ImageView by any padding or margins in the layout to get the correct coordinates of the Bitmap.
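Putting the last two points together, a minimal sketch (my own, assuming no rotation in the image matrix): subtract the padding and the matrix translation, then divide by the matrix scale to land in bitmap coordinates.
float[] vals = new float[9];
imageView.getImageMatrix().getValues(vals);
float bitmapX = (event.getX() - imageView.getPaddingLeft() - vals[Matrix.MTRANS_X]) / vals[Matrix.MSCALE_X];
float bitmapY = (event.getY() - imageView.getPaddingTop() - vals[Matrix.MTRANS_Y]) / vals[Matrix.MSCALE_Y];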
To add to kcoppock's answer, note that:
//original height and width of the bitmap
int intrinsicHeight = drawable.getIntrinsicHeight();
int intrinsicWidth = drawable.getIntrinsicWidth();
may return an answer you're not expecting. These values depend on the dpi of the drawable folder you load the image from. For instance, you might get a different value if you load the image from /drawable vs /drawable-hdpi vs /drawable-ldpi.
Get the floor width and height
float floorWidth = floorImage.getWidth();
float floorHeight = floorImage.getHeight();
Calculate the proportionate values
float proportionateWidth = bitmapWidth / floorWidth;
float proportionateHeight = bitmapHeight / floorHeight;
Your X & Y
float x = 315;
float y = 119;
Multiply by the proportionate values
x = x * proportionateWidth;
y = y * proportionateHeight;
As I came across this question and tried it out myself, here is my solution.
It seems to work with stretched and centered images.
class MyEditableImageView(context: Context, attrs: AttributeSet) :
    androidx.appcompat.widget.AppCompatImageView(context, attrs) {

    override fun onTouchEvent(event: MotionEvent): Boolean {
        val image = drawable.toBitmap().copy(Bitmap.Config.ARGB_8888, true)
        val xp = (event.x - imageMatrix.values()[Matrix.MTRANS_X]) / imageMatrix.values()[Matrix.MSCALE_X]
        val yp = (event.y - imageMatrix.values()[Matrix.MTRANS_Y]) / imageMatrix.values()[Matrix.MSCALE_Y]
        if (xp >= 0 && xp < image.width && yp >= 0 && yp < image.height) {
            doSomethingOnImage(image, xp, yp)
            setImageBitmap(image)
        }
        return super.onTouchEvent(event)
    }
    ...
}
