Let's say I start with a Bitmap that's 1000px x 1000px.
I load it into a Canvas backed by a SurfaceView, which is displayed at some arbitrary dimensions that differ from device to device.
I can get those dimensions at runtime and measure the scale between them and the original (if I need that information at the end).
Then I allow the user to pinch/zoom/translate the image around the displayed canvas. All the while I maintain a Matrix that keeps track of these transformations and is used to re-draw the image in its displayed screen region.
Consequently, this Matrix's values all apply to the scaled space (not the original 1000x1000 graphic).
So far so good - I have all this working.
When all is said and done, I'd like to apply this Matrix to the original Bitmap and save it out. However, I'm at a loss as to how to modify its internal values so that it applies to the unscaled original (1000x1000) size.
I'm curious whether there's some auto-magical way to translate these values, or whether I have to build a new Matrix by adjusting each value according to the scale between the two sizes.
To invert a matrix { a, b, c, d, e, f } (the usual 2D affine layout, where a point maps as x' = a*x + c*y + e and y' = b*x + d*y + f), the inverse is the matrix { ia, ib, ic, id, ie, if } computed as:
var cross = a * d - b * c;
ia = d / cross;
ib = -b / cross;
ic = -c / cross;
id = a / cross;
ie = (c * f - d * e) / cross;
if = -(a * f - b * e) / cross;
This reverses the transform: the original matrix maps image coordinates to screen coordinates, and the inverse matrix maps screen coordinates back to image coordinates.
If you have a transform on the screen and you want to know where on the image the top left of the screen is, get the inverse transform and apply it to the screen coordinate (0,0), the top left.
scrX = ?
scrY = ?
imageX = scrX * ia + scrY * ic + ie;
imageY = scrX * ib + scrY * id + if;
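For the Android case in the original question, here is a minimal sketch of getting back to the unscaled bitmap. It assumes displayScale holds the displayed-size-to-1000px ratio and screenMatrix is the Matrix tracked during pinch/zoom (both names are placeholders for values you already have); it conjugates the screen-space matrix with the scale, and android.graphics.Matrix.invert() gives you the inverse described above:
Matrix bitmapToScreen = new Matrix();
bitmapToScreen.setScale(displayScale, displayScale);
Matrix screenToBitmap = new Matrix();
bitmapToScreen.invert(screenToBitmap); // inverse of the pure scale
// bitmap space -> screen space, apply the user's pinch/zoom/translate, then back to bitmap space
Matrix bitmapMatrix = new Matrix(bitmapToScreen);
bitmapMatrix.postConcat(screenMatrix);
bitmapMatrix.postConcat(screenToBitmap);
// bitmapMatrix now expresses the same transformation in original 1000x1000 coordinates,
// e.g. outputCanvas.drawBitmap(originalBitmap, bitmapMatrix, null);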
Related
Is there a way to check if I touched the object on the screen? As I understand it, the HitResult class allows me to check if I touched the recognized and mapped surface. But I want to check whether I touched the object that is placed on that surface.
ARCore doesn't really have a concept of an object, so we can't directly provide that. I suggest looking at ray-sphere tests for a starting point.
However, I can help with getting the ray itself (to be added to HelloArActivity):
/**
* Returns a world coordinate frame ray for a screen point. The ray is
* defined using a 6-element float array containing the head location
* followed by a normalized direction vector.
*/
float[] screenPointToWorldRay(float xPx, float yPx, Frame frame) {
float[] points = new float[12]; // {clip query, camera query, camera origin}
// Set up the clip-space coordinates of our query point
// +x is right:
points[0] = 2.0f * xPx / mSurfaceView.getMeasuredWidth() - 1.0f;
// +y is up (android UI Y is down):
points[1] = 1.0f - 2.0f * yPx / mSurfaceView.getMeasuredHeight();
points[2] = 1.0f; // +z is forwards (remember clip, not camera)
points[3] = 1.0f; // w (homogenous coordinates)
float[] matrices = new float[32]; // {proj, inverse proj}
// If you'll be calling this several times per frame factor out
// the next two lines to run when Frame.isDisplayRotationChanged().
mSession.getProjectionMatrix(matrices, 0, 1.0f, 100.0f);
Matrix.invertM(matrices, 16, matrices, 0);
// Transform clip-space point to camera-space.
Matrix.multiplyMV(points, 4, matrices, 16, points, 0);
// points[4,5,6] is now a camera-space vector. Transform to world space to get a point
// along the ray.
float[] out = new float[6];
frame.getPose().transformPoint(points, 4, out, 3);
// use points[8,9,10] as a zero vector to get the ray head position in world space.
frame.getPose().transformPoint(points, 8, out, 0);
// normalize the direction vector:
float dx = out[3] - out[0];
float dy = out[4] - out[1];
float dz = out[5] - out[2];
float scale = 1.0f / (float) Math.sqrt(dx*dx + dy*dy + dz*dz);
out[3] = dx * scale;
out[4] = dy * scale;
out[5] = dz * scale;
return out;
}
If you're calling this several times per frame, see the comment about the getProjectionMatrix and invertM calls.
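As a rough follow-up to the ray-sphere suggestion (not part of the original answer), a minimal test against the 6-float ray returned by screenPointToWorldRay could look like this, where center and radius describe a hypothetical bounding sphere around your object:
boolean rayHitsSphere(float[] ray, float[] center, float radius) {
    // ray[0..2] = head/origin position, ray[3..5] = normalized direction
    float ox = ray[0] - center[0];
    float oy = ray[1] - center[1];
    float oz = ray[2] - center[2];
    // Quadratic |o + t*d - c|^2 = r^2 with d normalized, so a == 1
    float b = 2f * (ox * ray[3] + oy * ray[4] + oz * ray[5]);
    float c = ox * ox + oy * oy + oz * oz - radius * radius;
    return b * b - 4f * c >= 0f; // non-negative discriminant means the line hits the sphere
}
This checks the infinite line through the ray; for a stricter test you would also require the nearer root t = (-b - sqrt(b*b - 4*c)) / 2 to be non-negative so the sphere isn't behind the head.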
Apart from mouse picking with ray casting (cf. Ian's answer), the other commonly used technique is a picking buffer, explained in detail (with C++ code) here
The trick behind 3D picking is very simple. We will attach a running index to each triangle and have the FS output the index of the triangle that the pixel belongs to. The end result is that we get a "color" buffer that doesn't really contain colors. Instead, for each pixel which is covered by some primitive we get the index of this primitive. When the mouse is clicked on the window we will read back that index (according to the location of the mouse) and render the selected triangle red. By combining a depth buffer in the process we guarantee that when several primitives are overlapping the same pixel we get the index of the top-most primitive (closest to the camera).
So in a nutshell:
Every object's draw method needs a running index and a boolean for whether this draw pass renders to the picking buffer or not.
The render method converts the index into a grayscale color and the scene is rendered.
After the whole rendering is done, retrieve the pixel color at the touch position with GL11.glReadPixels (passing the x and y of the pixel you want the colour of). Then translate the color back to an index and the index back to an object. Voilà, you have your clicked object.
To be fair, for a mobile use case you should probably read a 10x10 rectangle, iterate through it, and pick the first non-background color you find - because touches are never that precise.
This approach works independently of the complexity of your objects.
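Here is a rough sketch of the read-back step on Android, assuming GLES20 rather than GL11, and assuming the picking pass encodes each object as the grayscale value (index + 1) / 255 with 0 reserved for the background (pickIndexAt is an invented name, not an API):
// Must run on the GL thread, right after the picking render pass.
int pickIndexAt(int touchX, int touchY, int viewportHeight) {
    java.nio.ByteBuffer pixel = java.nio.ByteBuffer.allocateDirect(4)
            .order(java.nio.ByteOrder.nativeOrder());
    // GL's window origin is bottom-left, Android touch coordinates are top-left
    android.opengl.GLES20.glReadPixels(touchX, viewportHeight - touchY, 1, 1,
            android.opengl.GLES20.GL_RGBA, android.opengl.GLES20.GL_UNSIGNED_BYTE, pixel);
    int gray = pixel.get(0) & 0xFF;   // any channel works for a grayscale encoding
    return gray == 0 ? -1 : gray - 1; // -1 means background / nothing hit
}
For the 10x10 rectangle mentioned above you would pass a width and height of 10 (minus an offset), allocate a 10*10*4 byte buffer, and scan it for the first non-zero value.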
I am trying to scan a passport page using the phone's camera using OpenCV.
In the above image the contour marked in red is my ROI (will need a top view of that). Performing segmentation I can detect the MRZ area. And the pages should have a fixed aspect ratio. Is there a way to scale the green contour using the aspect ratio to approximate the red one? I have tried finding the corners of the green rect using approxPolyDP, and then scaling that rect and finally doing a perspective warp to get the top view. The problem is that the perspective rotation is not accounted for while doing the rectangular scaling, so the final rect is often wrong.
Often I get an output as marked in the following image
Update: Adding a little more explanation
In regard to the 1st image (assuming the red rect will always have a constant aspect ratio),
My goal: is to crop out the red marked portion and then get a top view
My approach: detect the MRZ/green rect -> now assume the bottom edge of the green rect is the same as the red one (close enough) -> So I got the width and two corners of the rect -> calculate other two corners using the height/aspect ratio
Problem: my above calculation doesn't output the red rect; instead it outputs the green rect in the 2nd image (maybe because those quadrilaterals aren't rectangles, so the angles between edges aren't 0 or 90 degrees)
As far as I understand, your main goal is to get the top view of the passport page when its photo is taken from an arbitrary angle.
Also, as I understand it, your approach is the following:
Find MRZ and its wrapping polygon
Extend the MRZ polygon to the top - this would give you the page polygon
Warp perspective to get the top view.
And the main obstacle currently is to extend the polygon.
Please correct me if I understood the goal incorrectly.
Extending a polygon is quite easy from a mathematical perspective. The points on each side of the polygon define a side line; if you extend that line further, you can place a new point on it. Programmatically it may look like this:
new_left_top_x = old_left_bottom_x + (old_left_top_x - old_left_bottom_x) * pass_height_to_MRZ_height_ratio
new_left_top_y = old_left_bottom_y + (old_left_top_y - old_left_bottom_y) * pass_height_to_MRZ_height_ratio
The same can be done for the right part. This approach would also work with rotations up to 45 degrees.
However, I'm afraid this approach would not give accurate results. I would suggest detecting the passport page itself instead of the MRZ. The reason is that the page itself is quite a noticeable object in the photo and can easily be found with the findContours function.
I wrote some code to illustrate the idea that detecting the MRZ is not really necessary:
import os
import imutils
import numpy as np
import argparse
import cv2
# Thresholds
passport_page_aspect_ratio = 1.44
passport_page_coverage_ratio_threshold = 0.6
morph_size = (4, 4)
def pre_process_image(image):
# Let's get rid of color first
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Then apply Otsu threshold to reveal important areas
ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
# erode white areas to "disconnect" them
# and dilate back to restore their original shape
morph_struct = cv2.getStructuringElement(cv2.MORPH_RECT, morph_size)
thresh = cv2.erode(thresh, morph_struct, anchor=(-1, -1), iterations=1)
thresh = cv2.dilate(thresh, morph_struct, anchor=(-1, -1), iterations=1)
return thresh
def find_passport_page_polygon(image):
cnts = cv2.findContours(image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
for cnt in cnts:
# compute the aspect ratio and coverage ratio of the bounding box
# width to the width of the image
(x, y, w, h) = cv2.boundingRect(cnt)
ar = w / float(h)
cr_width = w / float(image.shape[1])
# check to see if the aspect ratio and coverage width are within thresholds
if ar > passport_page_aspect_ratio and cr_width > passport_page_coverage_ratio_threshold:
# approximate the contour with a polygon with 4 points
epsilon = 0.02 * cv2.arcLength(cnt, True)
approx = cv2.approxPolyDP(cnt, epsilon, True)
return approx
return None
def order_points(pts):
# initialize a list of coordinates that will be ordered in the order:
# top-left, top-right, bottom-right, bottom-left
rect = np.zeros((4, 2), dtype="float32")
pts = pts.reshape(4, 2)
# the top-left point will have the smallest sum, whereas
# the bottom-right point will have the largest sum
s = pts.sum(axis=1)
rect[0] = pts[np.argmin(s)]
rect[2] = pts[np.argmax(s)]
# now, compute the difference between the points, the
# top-right point will have the smallest difference,
# whereas the bottom-left will have the largest difference
diff = np.diff(pts, axis=1)
rect[1] = pts[np.argmin(diff)]
rect[3] = pts[np.argmax(diff)]
return rect
def get_passport_top_vew(image, pts):
rect = order_points(pts)
(tl, tr, br, bl) = rect
# compute the height of the new image, which will be the
# maximum distance between the top-right and bottom-right
# y-coordinates or the top-left and bottom-left y-coordinates
height_a = np.sqrt(((tr[0] - br[0]) ** 2) + ((tr[1] - br[1]) ** 2))
height_b = np.sqrt(((tl[0] - bl[0]) ** 2) + ((tl[1] - bl[1]) ** 2))
max_height = max(int(height_a), int(height_b))
# compute the width using standard passport page aspect ratio
max_width = int(max_height * passport_page_aspect_ratio)
# construct the set of destination points to obtain the top view, specifying points
# in the top-left, top-right, bottom-right, and bottom-left order
dst = np.array([
[0, 0],
[max_width - 1, 0],
[max_width - 1, max_height - 1],
[0, max_height - 1]], dtype="float32")
# compute the perspective transform matrix and apply it
M = cv2.getPerspectiveTransform(rect, dst)
warped = cv2.warpPerspective(image, M, (max_width, max_height))
return warped
if __name__ == "__main__":
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="path to images directory")
args = vars(ap.parse_args())
in_file = args["image"]
filename_base = in_file.replace(os.path.splitext(in_file)[1], "")
img = cv2.imread(in_file)
pre_processed = pre_process_image(img)
# Visualizing pre-processed image
cv2.imwrite(filename_base + ".pre.png", pre_processed)
page_polygon = find_passport_page_polygon(pre_processed)
if page_polygon is not None:
# Visualizing found page polygon
vis = img.copy()
cv2.polylines(vis, [page_polygon], True, (0, 255, 0), 2)
cv2.imwrite(filename_base + ".bounds.png", vis)
# Visualizing the warped top view of the passport page
top_view_page = get_passport_top_vew(img, page_polygon)
cv2.imwrite(filename_base + ".top.png", top_view_page)
The results I got:
For better results it would also be good to compensate for the camera lens distortion.
I have a renderer using directx and openGL, and a 3d scene. The viewport and the window are of the same dimensions.
How do I implement picking given mouse coordinates x and y in a platform independent way?
If you can, do the picking on the CPU by calculating a ray from the eye through the mouse pointer and intersect it with your models.
If this isn't an option I would go with some type of ID rendering. Assign each object you want to pick a unique color, render the objects with these colors and finally read out the color from the framebuffer under the mouse pointer.
EDIT: If the question is how to construct the ray from the mouse coordinates, you need the following: a projection matrix P and the camera transform C. If the coordinates of the mouse pointer are (x, y) and the size of the viewport is (width, height), one position in clip space along the ray is:
mouse_clip = [
float(x) * 2 / float(width) - 1,
1 - float(y) * 2 / float(height),
0,
1]
(Notice that I flipped the y-axis, since the origin of the mouse coordinates is often in the upper left corner.)
The following is also true:
mouse_clip = P * C * mouse_worldspace
Which gives:
mouse_worldspace = inverse(C) * inverse(P) * mouse_clip
We now have:
p = C.position(); //origin of camera in worldspace
n = normalize(mouse_worldspace - p); //unit vector from p through mouse pos in worldspace
Here's the viewing frustum:
First you need to determine where on the nearplane the mouse click happened:
rescale the window coordinates (0..640,0..480) to [-1,1], with (-1,-1) at the bottom-left corner and (1,1) at the top-right.
'undo' the projection by multiplying the scaled coordinates by what I call the 'unview' matrix: unview = (P * M).inverse() = M.inverse() * P.inverse(), where M is the ModelView matrix and P is the projection matrix.
Then determine where the camera is in worldspace, and draw a ray starting at the camera and passing through the point you found on the nearplane.
The camera is at M.inverse().col(4), i.e. the final column of the inverse ModelView matrix.
Final pseudocode:
normalised_x = 2 * mouse_x / win_width - 1
normalised_y = 1 - 2 * mouse_y / win_height
// note the y pos is inverted, so +y is at the top of the screen
unviewMat = (projectionMat * modelViewMat).inverse()
near_point = unviewMat * Vec(normalised_x, normalised_y, 0, 1)
camera_pos = ray_origin = modelViewMat.inverse().col(4)
ray_dir = near_point - camera_pos
Well, it's pretty simple; the theory behind this is always the same:
1) Unproject your 2D coordinate into 3D space twice (each API has its own function, but you can implement your own if you want): once at the minimum Z, once at the maximum Z.
2) From these two points, calculate the vector that goes from the min-Z point to the max-Z point.
3) With that vector and a point, construct the ray that goes from min Z to max Z.
4) Now you have a ray; with it you can do a ray-triangle/ray-plane/ray-something intersection and get your result (see the sketch after this list)...
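A sketch of steps 1-4, using Android's android.opengl.Matrix helpers for brevity (an assumption; the same steps apply with any 4x4 matrix library). viewMatrix, projMatrix and the viewport size are assumed to come from your renderer:
// Unproject a window-space point at depth winZ (0 = near plane, 1 = far plane).
float[] unproject(float winX, float winY, float winZ,
                  float[] viewMatrix, float[] projMatrix, int viewportW, int viewportH) {
    float[] pv = new float[16];
    float[] invPV = new float[16];
    android.opengl.Matrix.multiplyMM(pv, 0, projMatrix, 0, viewMatrix, 0);
    android.opengl.Matrix.invertM(invPV, 0, pv, 0);
    float[] ndc = {
            2f * winX / viewportW - 1f,
            1f - 2f * winY / viewportH,   // flip Y: mouse origin is top-left
            2f * winZ - 1f,
            1f };
    float[] world = new float[4];
    android.opengl.Matrix.multiplyMV(world, 0, invPV, 0, ndc, 0);
    // perspective divide
    return new float[] { world[0] / world[3], world[1] / world[3], world[2] / world[3] };
}
Call it twice, once with winZ = 0 and once with winZ = 1; the first result is a point on the near plane (effectively the ray origin), and the normalized difference between the two is the ray direction.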
I have little DirectX experience, but I'm sure it's similar to OpenGL. What you want is the gluUnProject call.
Assuming you have a valid Z buffer you can query the contents of the Z buffer at a mouse position with:
// obtain the viewport, modelview matrix and projection matrix
// you may keep the viewport and projection matrices throughout the program if you don't change them
GLint viewport[4];
GLdouble modelview[16];
GLdouble projection[16];
glGetIntegerv(GL_VIEWPORT, viewport);
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
// obtain the Z position (not world coordinates but in range 0 - 1)
GLfloat z_cursor;
glReadPixels(x_cursor, y_cursor, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &z_cursor);
// obtain the world coordinates
GLdouble x, y, z;
gluUnProject(x_cursor, y_cursor, z_cursor, modelview, projection, viewport, &x, &y, &z);
If you don't want to use GLU, you can also implement gluUnProject yourself; its functionality is relatively simple and is described at opengl.org.
Ok, this topic is old but it was the best I found on the subject, and it helped me a bit, so I'll post here for those who are following ;-)
This is the way I got it to work without having to compute the inverse of Projection matrix:
void Application::leftButtonPress(u32 x, u32 y){
GL::Viewport vp = GL::getViewport(); // just a call to glGet GL_VIEWPORT
vec3f p = vec3f::from(
((float)(vp.width - x) / (float)vp.width),
((float)y / (float)vp.height),
1.);
// alternatively vec3f p = vec3f::from(
// ((float)x / (float)vp.width),
// ((float)(vp.height - y) / (float)vp.height),
// 1.);
p *= vec3f::from(APP_FRUSTUM_WIDTH, APP_FRUSTUM_HEIGHT, 1.);
p += vec3f::from(APP_FRUSTUM_LEFT, APP_FRUSTUM_BOTTOM, 0.);
// now p elements are in (-1, 1)
vec3f near = p * vec3f::from(APP_FRUSTUM_NEAR);
vec3f far = p * vec3f::from(APP_FRUSTUM_FAR);
// ray in world coordinates
Ray ray = { _camera->getPos(), -(_camera->getBasis() * (far - near).normalize()) };
_ray->set(ray.origin, ray.dir, 10000.); // this is a debugging vertex array to see the Ray on screen
Node* node = _scene->collide(ray, Transform());
cout << "node is : " << node << endl;
}
This assumes a perspective projection, but the question never arises for the orthographic one in the first place.
I've got the same situation with ordinary ray picking, but something is wrong. I've performed the unproject operation the proper way, but it just doesn't work. I think I've made some mistake, but can't figure out where. My matrix multiplication, inverse, and vector-by-matrix multiplication all seem to work fine; I've tested them.
In my code I'm reacting to WM_LBUTTONDOWN. So lParam returns the [Y][X] coordinates as 2 words in a dword. I extract them, then convert to normalized space; I've checked that this part also works fine. When I click the lower left corner I get values close to -1, -1, and good values for all three other corners. I'm then using the line_points.vtx array for debugging, and it's not even close to reality.
unsigned int x_coord=lParam&0x0000ffff; //X RAW COORD
unsigned int y_coord=client_area.bottom-(lParam>>16); //Y RAW COORD
double xn=((double)x_coord/client_area.right)*2-1; //X [-1 +1]
double yn=1-((double)y_coord/client_area.bottom)*2;//Y [-1 +1]
_declspec(align(16))gl_vec4 pt_eye(xn,yn,0.0,1.0);
gl_mat4 view_matrix_inversed;
gl_mat4 projection_matrix_inversed;
cam.matrixProjection.inverse(&projection_matrix_inversed);
cam.matrixView.inverse(&view_matrix_inversed);
gl_mat4::vec4_multiply_by_matrix4(&pt_eye,&projection_matrix_inversed);
gl_mat4::vec4_multiply_by_matrix4(&pt_eye,&view_matrix_inversed);
line_points.vtx[line_points.count*4]=pt_eye.x-cam.pos.x;
line_points.vtx[line_points.count*4+1]=pt_eye.y-cam.pos.y;
line_points.vtx[line_points.count*4+2]=pt_eye.z-cam.pos.z;
line_points.vtx[line_points.count*4+3]=1.0;
So this is my second question today; I might be pushing my luck.
In short, I'm making a 3D first-person app where you can move about and look around.
In my onDrawFrame I am using
Matrix.setLookAtM(mViewMatrix, 0, eyeX , eyeY, eyeZ , lookX , lookY , lookZ , upX, upY, upZ);
To move back, forth, sidestep left, etc. I use something like this (forward code listed):
float v[] = {mRenderer.lookX - mRenderer.eyeX,mRenderer.lookY - mRenderer.eyeY, mRenderer.lookZ - mRenderer.eyeZ};
mRenderer.eyeX += v[0] * SPEED_MOVE;
mRenderer.eyeZ += v[2] * SPEED_MOVE;
mRenderer.lookX += v[0] * SPEED_MOVE;
mRenderer.lookZ += v[2] * SPEED_MOVE;
This works
Now I want to look around, so I tried to port my iPhone OpenGL 1.0 code. This is left/right:
float v[] = {mRenderer.lookX - mRenderer.eyeX,mRenderer.lookY - mRenderer.eyeY, mRenderer.lookZ - mRenderer.eyeZ};
if (x > mPreviousX )
{
mRenderer.lookX += ((Math.cos(SPEED_TURN / 2) * v[0]) - (Math.sin(SPEED_TURN / 2) * v[2]));
mRenderer.lookZ += ((Math.sin(SPEED_TURN / 2) * v[0]) + (Math.cos(SPEED_TURN / 2) * v[2]));
}
else
{
mRenderer.lookX -= (Math.cos(SPEED_TURN / 2) *v[0] - Math.sin(SPEED_TURN / 2) * v[2]);
mRenderer.lookZ -= (Math.sin(SPEED_TURN / 2) *v[0] + Math.cos(SPEED_TURN / 2) * v[2]);
}
This works for about 35 degrees and then goes mental.
Any ideas?
First of all, I would suggest not tracking the look vector but rather a forward vector; then, in the lookAt method, use eye + forward to generate the look point. This way you can drop the look update completely when moving, and you don't need to compute the v vector (mRenderer.eyeX += forward.x * SPEED_MOVE; ...).
To make things simpler, I suggest you normalize the forward and up vectors whenever you change them (the methods below assume you did).
Now, as for rotation, there are two ways. Either use the right and up vectors to nudge the forward (and up) vectors, which is great for small turns (I'd say up to about 10 degrees, and it is capped at 90 degrees), or compute the current angle, add whatever angle you want, and recreate the vectors.
The first rotation method is quite simple:
vector forward = forward
vector up = up
vector right = cross(forward, up) //this one might be the other way around as up, forward :)
//going left or right:
forward = normalized(forward + right*rotationSpeedX)
//going up or down:
forward = normalized(forward + up*rotationSpeedY)
vector right = cross(forward, up) //this one might be the other way around
vector up = normalized(cross(forward, right)) //this one might be the other way around
//tilt left or right:
up = normalized(up + right*rotationZ)
The second method needs a bit of trigonometry:
Normally, to compute an angle you could just call atan(forward.z/forward.x) and add some if statements, since the produced result only covers a 180-degree range (I am sure you will be able to find answers on the web for getting a rotation from a vector, though). The same goes for the up vector when getting the vertical rotation. Then, after you get the angles, you can simply add any angle you want and recreate the vectors with sin and cos. There is a catch, though: if you rotate the camera in such a way that forward faces straight up (0,1,0), you need to get the first rotation from the up vector and the second from the forward vector, but you can avoid all that if you cap the maximum vertical angle to something like +-85 degrees (and there are many games that actually do that). The second thing is that if you use this approach, your environment must support +-infinity, or atan(forward.z/forward.x) will break if forward.x == 0.
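A minimal sketch of that angle-based approach (all names here are illustrative, not from the question's code):
class FpsCamera {
    float yaw = 0f;   // horizontal angle, radians
    float pitch = 0f; // vertical angle, radians
    float[] forward = new float[3];

    void rotate(float dYaw, float dPitch) {
        yaw += dYaw;
        // cap the vertical rotation to roughly +-85 degrees, as suggested above
        pitch = Math.max(-1.48f, Math.min(1.48f, pitch + dPitch));
        float cosPitch = (float) Math.cos(pitch);
        forward[0] = (float) (Math.cos(yaw) * cosPitch);
        forward[1] = (float) Math.sin(pitch);
        forward[2] = (float) (Math.sin(yaw) * cosPitch);
        // the look point for Matrix.setLookAtM is then eye + forward
    }
}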
And one addition about the first approach: since I see you are trying to move around a 2D plane, the forward vector used with the movement speed should be normalized(forward.x, 0, forward.z). It is important to normalize it, or you will move more slowly when the camera tilts further up or down.
The second thing is that when you rotate left/right you might want to force the up vector to (0,1,0), normalize the right vector, and lastly recreate the up vector from forward and right. Again, you should then cap the vertical rotation (up.z should stay larger than some small value like .01).
It turned out my rotation code was wrong:
if (x > mPreviousX )
{
mRenderer.lookX = (float) (mRenderer.eyeX + ((Math.cos(SPEED_TURN / 2) * v[0]) - (Math.sin(SPEED_TURN / 2) * v[2])));
mRenderer.lookZ = (float) (mRenderer.eyeZ + ((Math.sin(SPEED_TURN / 2) * v[0]) + (Math.cos(SPEED_TURN / 2) * v[2])));
}
else
{
mRenderer.lookX = (float) (mRenderer.eyeX + ((Math.cos(-SPEED_TURN / 2) * v[0]) - (Math.sin(-SPEED_TURN / 2) * v[2])));
mRenderer.lookZ = (float) (mRenderer.eyeZ + ((Math.sin(-SPEED_TURN / 2) * v[0]) + (Math.cos(-SPEED_TURN / 2) * v[2])));
}
I have Java code for SVG drawing. It processes transforms including rotate, and does this very well, as far as I can see in numerous test pictures compared against their rendering in Chrome. Next, what I need is to get the actual object location, which in many images is declared via transforms. So I decided just to read X and Y from the Matrix used for drawing. Unfortunately I get incorrect values for the rotate transform; that is, they do not correspond to the real object location in the image.
The stripped down code looks like this:
Matrix matrix = new Matrix();
float cx = 1000; // suppose this is an object X coordinate
float cy = 300; // this is its Y coordinate
float angle = -90; // rotate counterclockwise, got from "rotate(-90, 1000, 300)"
// shift to -X,-Y, so object is in the center
matrix.postTranslate(-cx, -cy);
// rotate actually
matrix.postRotate(angle);
// shift back
matrix.postTranslate(cx, cy);
// debug goes here
float[] values = new float[9];
matrix.getValues(values);
Log.v("HELLO", values[Matrix.MTRANS_X] + " " + values[Matrix.MTRANS_Y]);
The log outputs the values 700 and 1300 respectively. I'd expect 0 and 0, because I see the object rotated in place in my image (that is, there is no movement at all), and the postTranslate calls should compensate for each other. Of course, I see how these values are formed from 1000 and 300, but I don't understand why. Once again, I point out that the matrix with these strange values is used for the actual object drawing, and it looks correct. Could someone explain what happens here? Am I missing something? So far I have only one solution to my problem: just do not try to obtain the position from rotate transforms, do it only for explicit matrix and translate transforms. But this approach lacks generality, and anyway I thought a matrix should have reasonable values (including offsets) for any transformation type.
The answer is that the matrix is an operator for space transformation and should not be used to read off the object position directly. The translation entries you logged (700, 1300) are simply where the matrix maps the origin (0, 0); the rotation center (1000, 300) maps to itself, which is why the object appears rotated in place. Instead, one should take the initial object coordinates, as specified in the x and y attributes of the SVG tag, and apply the matrix to them:
float[] src = new float[2];
src[0] = cx;
src[1] = cy;
matrix.mapPoints(src);
After this, src[0] and src[1] hold the proper location values.
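As a quick sanity check of the explanation above, mapping a couple of points with the matrix built in the question shows what its raw translation entries really mean:
float[] origin = {0f, 0f};
matrix.mapPoints(origin);      // -> {700, 1300}: where (0,0) lands, i.e. MTRANS_X / MTRANS_Y
float[] center = {1000f, 300f};
matrix.mapPoints(center);      // -> {1000, 300}: the rotation center stays put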