Blurring a region in an image on Android (in Java)

In my Android app, I am capturing a screenshot programmatically from a background service. I obtain it as a Bitmap.
Next, I obtain the co-ordinates of a region of interest (ROI) with the following Android framework API:
Rect ROI = new Rect();
viewNode.getBoundsInScreen(ROI);
Here, getBoundsInScreen() is the Android equivalent of the Javascript function getBoundingClientRect().
A Rect in Android has the following properties:
rect.top
rect.left
rect.right
rect.bottom
rect.height()
rect.width()
rect.centerX() /* rounded off to integer */
rect.centerY()
rect.exactCenterX() /* exact value in float */
rect.exactCenterY()
What does top, left, right and bottom mean in Android Rect object
Whereas a Rect in OpenCV has the following properties
rect.width
rect.height
rect.x /* x coordinate of the top-left corner */
rect.y /* y coordinate of the top-left corner */
Now before we can perform any OpenCV-related operations, we need to transform the Android Rect to an OpenCV Rect.
Understanding how actually drawRect or drawing coordinates work in Android
There are two ways to convert an Android Rect to an OpenCV Rect (as suggested by Karl Phillip in his answer). Both generate the same values and both produce the same result:
/* Compute the top-left corner using the center point of the rectangle. */
int x = androidRect.centerX() - (androidRect.width() / 2);
int y = androidRect.centerY() - (androidRect.height() / 2);
// OR simply use the already available member variables:
x = androidRect.left;
y = androidRect.top;
int w = androidRect.width();
int h = androidRect.height();
org.opencv.core.Rect roi = new org.opencv.core.Rect(x, y, w, h);
Now one of the OpenCV operations I am performing is blurring the ROI within the screenshot:
Mat originalMat = new Mat();
Bitmap configuredBitmap32 = originalBitmap.copy(Bitmap.Config.ARGB_8888, true);
Utils.bitmapToMat(configuredBitmap32, originalMat);
Mat ROIMat = originalMat.submat(roi).clone();
Imgproc.GaussianBlur(ROIMat, ROIMat, new org.opencv.core.Size(0, 0), 5, 5);
ROIMat.copyTo(originalMat.submat(roi));
Bitmap blurredBitmap = Bitmap.createBitmap(originalMat.cols(), originalMat.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(originalMat, blurredBitmap);
This brings us very close to the desired result. Almost there, but not quite. The area just BENEATH the targeted region is blurred.
For example, if the targeted region of interest is a password field, the above code produces the following results:
On the left, Microsoft Live ROI, and on the right, Pinterest ROI:
As can be seen, the area just below the ROI gets blurred.
So my question is, finally, why isn't the exact region of interest blurred?
The co-ordinates obtained through the Android API getBoundsInScreen() appear to be correct.
Converting an Android Rect to an OpenCV Rect also appears to be correct. Or is it?
The code for blurring a region of interest also appears to be correct. Is there another way to do the same thing?
N.B: I've provided the actual, full-size screenshots as I am getting them. They have been scaled down by 50% to fit in this post, but other than that they are exactly as I am getting them on the Android device.

If I'm not mistaken, OpenCV's Rect assumes that x and y specify the top left corner of the rectangle:
/* Compute the top-left corner using the center point of the rectangle
* TODO: take care of float to int conversion
*/
int x = androidRect.centerX() - (androidRect.width() / 2);
int y = androidRect.centerY() - (androidRect.height() / 2);
// OR simply use the already available member variables:
x = androidRect.left;
y = androidRect.top;
int w = androidRect.width();
int h = androidRect.height();
org.opencv.core.Rect roi = new org.opencv.core.Rect(x, y, w, h);

As per the screenshots, the value you get for rect.x is not the same as the OpenCV rect's.
That's because the Android rect's coordinates are based on the screen's pixel density, while the OpenCV rect is expressed in the image's pixel rows and columns.
If you compare the height of the screen with the total rows of the original Mat, you may find they differ; for the rect to land in the right place they would need to be the same, so you have to multiply the distances by a scaling constant to get the accurate position of the rect.
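For example, something along these lines (a sketch; androidRect and originalMat are the names from the question, and context is assumed to be any available Context):
// Sketch: scale the screen-space rect into Mat-space before building the ROI
DisplayMetrics metrics = context.getResources().getDisplayMetrics();
double scaleX = (double) originalMat.cols() / metrics.widthPixels;
double scaleY = (double) originalMat.rows() / metrics.heightPixels;

org.opencv.core.Rect roi = new org.opencv.core.Rect(
        (int) Math.round(androidRect.left * scaleX),
        (int) Math.round(androidRect.top * scaleY),
        (int) Math.round(androidRect.width() * scaleX),
        (int) Math.round(androidRect.height() * scaleY));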

Related

Trouble getting the top and bottom of an imageView

I have a simple drawing app where I have a picture of a letter in the back filling an ImageView set to 45% of the screen's height. I have a JSON file that stores points along the letter. I'm trying to display those points on top of the picture of the letter.
Those points' coordinates range between y = -440 and y = 200. In order to properly display the points, I need the top and the height of the ImageView containing the letter to map the points onto the screen. I have to map the points to the proper positions at runtime, because different screen sizes need a different scale to display the points properly.
This is what it should look like (with a phone-specific correction factor):
This is what it actually looks like:
I render the drawing via a Canvas that I paint the points onto. I'm pretty sure the problem lies in how I'm getting the top of the ImageView.
Here's what I've tried:
//1
float y = view.getTop();
//2
final int[] screenPos = new int[2];
iv.getLocationOnScreen(screenPos);
float y = screenPos[1];
//3
float offset = iv.getTop() - screenPos[1];
float y = iv.getTop() + offset;
Is there something I'm supposed to be doing that I'm not? Is there a better way than the relative returns of getTop()? Help.
This will give you the position relative to the screen:
Rect rect = new Rect();
view.getGlobalVisibleRect(rect);
float top = rect.top;
//there are also rect.bottom, rect.width(), rect.height(), etc.
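Applied to the letter points, a minimal sketch (assuming the points span y = -440..200 in letter space as stated in the question; iv and pointY are illustrative names):
// Sketch: map a letter-space y coordinate onto the visible ImageView bounds
final float yMin = -440f, yMax = 200f;
Rect rect = new Rect();
iv.getGlobalVisibleRect(rect);
float scale = rect.height() / (yMax - yMin);
float screenY = rect.top + (pointY - yMin) * scale;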

Find the center of a shape (path) on a canvas (Android Studio)

Is it possible to find the center of a shape on a canvas in Android Studio? I've logged the touch points and can't seem to figure it out. It does, however, appear that the canvas's (0,0) point is the top left, but the paths/shapes on the canvas see the center point as (0,0).
For an example of what I'm talking about check out the attached image, I'm trying to find where the green dot is.
Thanks in advance for any help.
To find the center of a Path, use the method computeBounds(bounds, exact), which will set the first argument, RectF bounds, to the extent of the Path. Then it is just a matter of taking the mean of the left/right and top/bottom coordinates to get the geometric center of the path.
// mPath is your path. Must contain more than 1 path point
RectF bounds = new RectF();
mPath.computeBounds(bounds, false); // fills rect with bounds
PointF center = new PointF((bounds.left + bounds.right) / 2,
(bounds.top + bounds.bottom) / 2);
No Need for Mathematical Calculations
RectF bounds = new RectF();
mPath.computeBounds(bounds, false);
float centerX = bounds.centerX();
float centerY = bounds.centerY();
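Either way, marking that center on the canvas is then a one-liner (a sketch; canvas and a configured dotPaint are assumed):
// e.g. inside onDraw(): draw a dot at the path's center
canvas.drawCircle(bounds.centerX(), bounds.centerY(), 8f, dotPaint);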

How to place a rectangle at the center of the camera frame in opencv

I have taken the current frame as mRgba in this fashion:
mRgba = inputFrame.rgba();
Then I created a rectangle object which takes the height and width of the current camera frame:
rect = new Rect();
rect.width = mRgba.width();
rect.height = mRgba.height();
It takes up the whole space of the frame, but when I try to shrink this rectangle it shrinks on one side only (not as a whole, which is what I need).
So I tried to find the rectangle's center and then create another rectangle from that center and a predefined size:
int x = (int) (rect.tl().x + rect.br().x)/2;
int y = (int) (rect.tl().y + rect.br().y)/2;
Rect rect1 = new Rect(x,y,280,280);
Imgproc.rectangle(mRgba, rect1.tl(), rect1.br(), new Scalar(255, 0, 0), 2, 8, 0);
But it's still not in the center! I am not sure about the parameters the rectangle object takes, and I didn't find the OpenCV documentation that helpful. So how do I overcome this situation? I want the rectangle to be exactly at the center of the camera frame.
Based on my observation, the line below actually creates a rectangle whose top-left corner is (x, y), with width 280 and height 280.
Rect rect1 = new Rect(x,y,280,280);
(x,y) ------> width(280)
  |
  |  height(280)
  |
  V
So your calculation for the center point should be correct.
I hope the code below helps you:
int width = 280;
int height = 280;
Rect rect1 = new Rect(x - width / 2, y - height / 2, width, height);
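Alternatively, you can skip the intermediate full-frame rectangle and center the rect on the frame directly (a sketch using the same mRgba):
// Sketch: center a 280x280 rect directly on the camera frame
int cx = mRgba.width() / 2;
int cy = mRgba.height() / 2;
Rect centered = new Rect(cx - 140, cy - 140, 280, 280);
Imgproc.rectangle(mRgba, centered.tl(), centered.br(), new Scalar(255, 0, 0), 2, 8, 0);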
EDIT:
Thanks to Micka for the reminder. My explanation above is not quite right, although the code works.
The Android camera is landscape by default. Usually, we rotate the image by 90 degrees.
(Ref: Android - Camera preview is sideways)
Your calculation is based on a landscape image and OpenCV's coordinate system, which is different from what we usually use. (Ref: Reference coordinate system changes between OpenCV, OpenGL and Android Sensor)
It's difficult to explain in words, so I drew a picture for you :)
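If you do rotate the frame yourself, a common way is cv::rotate, exposed in the Java bindings as Core.rotate (a sketch):
// Sketch: rotate the default landscape frame 90 degrees clockwise
Mat rotated = new Mat();
Core.rotate(mRgba, rotated, Core.ROTATE_90_CLOCKWISE);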

In Android, what kind of transform does the mapRect API of the Matrix class perform?

I would like to know about the function of the mapRect API available in the Matrix class in Android. If I have a sample Matrix A and Rectangle R, then for
RectF R = new RectF(t1,t2,t3,t4);
A.mapRect(R);
what kind of transformation is likely to happen to R? It would be more helpful if someone could illustrate the mapRect() API with some suitable examples.
Here's a very simple example:
Let's take a matrix:
Matrix matrix = new Matrix();
Set that matrix to scale everything twice as large:
matrix.setScale(2.0F, 2.0F);
Create a rectangle that is 10x10 with origin in upper left corner:
RectF rect = new RectF(0F, 0F, 10F, 10F);
So when we call
matrix.mapRect(rect);
the input rectangle we created is replaced with the output rectangle, which is the result of transforming the input:
rect.left = 0F;
rect.top = 0F;
rect.right = 20F;
rect.bottom = 20F;
There is another version of the method
matrix.mapRect(RectF dst, RectF src);
that does the same transform without affecting the input rectangle.
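Putting the pieces together, here is the whole example as one snippet (a sketch):
Matrix matrix = new Matrix();
matrix.setScale(2.0F, 2.0F);              // scale everything twice as large

RectF src = new RectF(0F, 0F, 10F, 10F);  // 10x10 rect with origin at upper left
RectF dst = new RectF();
matrix.mapRect(dst, src);                 // src is left untouched
// dst is now (0, 0, 20, 20)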
What is a matrix?
Consider a mirror. The mirror takes your image and creates a horizontally flipped version of your image.
Consider a microphone and an amplifier. They take your voice and create a louder version of your voice.
That's what a matrix is. It's a transformer. It takes an input and creates an output that is based on the input. So a matrix can transform a point, a rectangle, a circle, a polygon...
For more info, see my answer How does Matrix.postScale( sx, sy, px, py) work?
Also check out Affine transformations | Wikipedia. There is an awesome graphic that shows the different affine transforms and their effects.

Android - calculating pixel rotation without matrix? And checking if pixel is in view

I'm hoping someone can help me out. I'm making an image manipulation app, and I found I needed a better way to load in large images.
My plan is to iterate through "hypothetical" pixels of an image (a "for loop" covering the width/height of the base image, so each iteration represents a pixel), scale/translate/rotate each pixel's position relative to the view, then use this information to determine which pixels are being displayed in the view itself, and finally use a combination of BitmapRegionDecoder and BitmapFactory.Options to load in only the section of the image that the output actually needs, rather than the full (even if scaled) image.
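(For clarity, the region-decoding step I have in mind is something like this sketch; the stream and rectangle values are placeholders:)
// Sketch: decode only a sub-rectangle of a large image
BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance(inputStream, false);
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inSampleSize = 2; // optional downsampling
Bitmap region = decoder.decodeRegion(new Rect(left, top, right, bottom), opts);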
So far I seem to have covered scale and translation of the image properly, but I can't seem to figure out how to calculate rotation. Since it's not a real Bitmap pixel I can't use Matrix.rotate =( Here are the image transformations in the onDraw of the view; imgPosX and imgPosY hold the center point of the image:
m.setTranslate(-userImage.getWidth() / 2.0f, -userImage.getHeight() / 2.0f);
m.postScale(curScale, curScale);
m.postRotate(angle);
m.postTranslate(imgPosX, imgPosY);
mCanvas.drawBitmap(userImage.get(), m, paint);
and here is the math so far of how I'm trying to determine if an images pixel is on the screen:
for(int j = 0;j < imageHeight;j++) {
for(int i = 0;i < imageWidth;i++) {
//image starts completely centered in view; assume image is original size for simplicity
//this is the original starting position for each pixel
int x = Math.round(((float) viewSizeWidth / 2.0f) - ((float) newImageWidth / 2.0f) + i);
int y = Math.round(((float) viewSizeHeight / 2.0f) - ((float) newImageHeight / 2.0f) + j);
//first we scale the pixel here, easy operation
x = Math.round(x * imageScale);
y = Math.round(y * imageScale);
//now we translate: we do this by determining how many pixels
//our image's x/y coordinates have moved from their original
//starting point; imgPosX and imgPosY in the view start at the
//center of the view
x = x + Math.round((imgPosX - ((float) viewSizeWidth / 2.0f)));
y = y + Math.round((imgPosY - ((float) viewSizeHeight / 2.0f)));
//TODO need rotation here
}
}
so, assuming my math up until rotation is correct (probably not, but it appears to be working so far), how would I then calculate the rotation from that pixel's position? I've tried other similar questions like:
Link 1
Link 2
Link 3
Without rotation, the pixels I expect to actually be on the screen are represented correctly (I made a text file that outputs the results as 1's and 0's so I can have a visual representation of what's on the screen), but with the formula found in those questions the information isn't what is expected. (Scenario: I've rotated an image so only the top left corner is visible in the view. Using the info from Here to rotate the pixel, I should expect to see a triangular set of 1's in the upper left corner of the output file, but that's not the case.)
So, how would I calculate a pixel's position after rotation without using the Android Matrix, but still get the same results?
And if I've just messed it up entirely my apologies =( Any help would be appreciated, this project has gone on for so long and I want to finally be done lol
If you need any more information I will provide as much as I possibly can =) Thank you for your time
I realize this question is particularly difficult so I will be posting a bounty as soon as SO allows.
You do not need to create your own Matrix; use the existing one.
http://developer.android.com/reference/android/graphics/Matrix.html
You can map bitmap coordinates to screen coordinates by using
float[] coords = {x, y};
m.mapPoints(coords);
float sx = coords[0];
float sy = coords[1];
If you want to map screen to bitmap coordinates, you can create the inverse matrix:
Matrix inverse = new Matrix();
m.invert(inverse); // fills `inverse`; returns false if m is not invertible
inverse.mapPoints(...)
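And for the 1's-and-0's visibility test in your loop, the forward mapping is all you need (a sketch; viewSizeWidth and viewSizeHeight as in your code):
// Sketch: does bitmap pixel (i, j) land inside the view after applying m?
float[] pt = {i, j};
m.mapPoints(pt);
boolean onScreen = pt[0] >= 0 && pt[0] < viewSizeWidth
        && pt[1] >= 0 && pt[1] < viewSizeHeight;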
I think your overall approach is going to be slow, as doing the pixel manipulation on the CPU from Java has a lot of overhead. When drawing bitmaps normally, the pixel manipulation is done on the GPU.
