I am using the Camera2 API in my app to communicate with the camera of an Android device, and I'm trying to implement a touch-to-focus feature, but the focus area is not accurate. I have tried different workarounds and suggestions found on Google, but I can't get the right focus area when I touch the screen. Has anyone here gotten this feature working smoothly in Kotlin?
Here's the snippet of my code that calculates the focus area:
val sensorArraySize: Rect? =
    mCameraCharacteristics.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE)
var y = focus_point_x
var x = focus_point_y
if (sensorArraySize != null) {
    y = (focus_point_x.toFloat() / previewSize.width * sensorArraySize.height().toFloat()).toInt()
    x = (focus_point_y.toFloat() / previewSize.height * sensorArraySize.width().toFloat()).toInt()
}
val halfTouchLength = 150
focusArea = MeteringRectangle(
    Math.max(x - halfTouchLength, 0),
    Math.max(y - halfTouchLength, 0),
    halfTouchLength * 2,
    halfTouchLength * 2,
    MeteringRectangle.METERING_WEIGHT_MAX - 1
)
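For completeness, the computed region is then applied to the preview request roughly as in the sketch below (simplified; mPreviewRequestBuilder, mCaptureSession and mBackgroundHandler are assumed to be the usual request builder, capture session and background handler of the class):

// Sketch: apply the computed metering rectangle and trigger autofocus.
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_REGIONS, arrayOf(focusArea))
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_REGIONS, arrayOf(focusArea))
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_AUTO)
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_START)
mCaptureSession.capture(mPreviewRequestBuilder.build(), null, mBackgroundHandler)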
I'm trying to achieve a 3D pop-out-of-the-screen kind of effect using LibGDX on Android (the camera process described at the link below):
https://www.anxious-bored.com/blog/
but the result I get is that objects appear stretched when the eye moves around.
I simulate the eye position using a gyroscope to get the device rotation and assume a fixed distance from the device screen. I've also incorporated ARCore for eye tracking, but that yields the same result.
Here is a GIF of what I'm seeing:
https://i.stack.imgur.com/YOf6n.gif
Anyone have any idea on what I'm doing wrong?
Here is the relevant code I'm using in Kotlin.
private fun updateCamera() {
    val camera = // The Perspective camera instance

    // Move the camera to the eye position, but keep the same camera direction
    camera.view.setToLookAt(camera.direction, camera.up)
        .mul(GDXHelperInstances.matrix4_1.setToTranslation(GDXHelperInstances.vector3_1.set(camera.position).scl(-1f)))

    // Get the size of the screen in meters
    val deviceHalfHeight = camera.viewportHeight / Gdx.graphics.ppcY * 0.01f
    val deviceHalfWidth = camera.viewportWidth / Gdx.graphics.ppcX * 0.01f

    // Get the device position and plane relative to the camera
    val pos = Vector3(Vector3.Zero).mul(camera.view)
    val plane = Plane(GDXHelperInstances.vector3_1.set(camera.direction).scl(-1f), Vector3.Zero)

    // Calculate the bounds of the viewport in virtual space (the device screen dimensions
    // in meters with center at Vector3.Zero) relative to the camera (view space)
    val left = pos.x - deviceHalfWidth
    val right = pos.x + deviceHalfWidth
    val bottom = pos.y - deviceHalfHeight
    val top = pos.y + deviceHalfHeight
    val nearScale = camera.near / plane.distance(camera.position).absoluteValue

    // Off-axis projection
    camera.projection.setToProjection(left * nearScale, right * nearScale, bottom * nearScale, top * nearScale, camera.near, camera.far)

    // Calculate new asymmetrical frustum
    camera.combined.set(camera.projection)
    Matrix4.mul(camera.combined.`val`, camera.view.`val`)
    camera.invProjectionView.set(camera.combined)
    Matrix4.inv(camera.invProjectionView.`val`)
    camera.frustum.update(camera.invProjectionView)
}
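For reference, the nearScale factor above implements the similar-triangles relation used for off-axis projection: the frustum extents at the near plane are the screen-plane extents scaled by near / d, where d is the perpendicular distance from the eye to the screen plane.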
I'm making a ConSumo game from the arcade machine in Bully. Basically, there's an enemy wrestler that moves in a straight line and bounces the player back on collision. I can't seem to figure out the logic for the angle at which to bounce the player when it collides with an enemy wrestler.
I tried calculating the collision angle using the arctan of (player.centerY - enemy.centerY) / (player.centerX - enemy.centerX) and then adding 180 degrees to mirror the angle.
double angle = Math.atan((player.getCenterY() - enemies[i].getCenterY()) / (player.getCenterX() - enemies[i].getCenterX()));
angle = Math.toDegrees(angle);
angle += 180;
angle = Math.toRadians(angle);
player.addX(Math.cos(angle) * strength);
player.addY(-(Math.sin(angle) * strength));
I tried to just make the player bounce back along the same angle (I know this is not the ideal result, but I want to at least get the hang of it first; if you can suggest better ways, I will appreciate it), but it only works on one or two sides of the collision, and on the other sides the player gets pulled through the enemy instead of bouncing back.
Maybe you can try the physics approach, which takes into account conservation of momentum and conservation of energy.
Basically, the player, with mass mp, has velocity [vp; 0] and the enemy, with mass me, has velocity [ve; 0]. There are no y components because they move horizontally only. Now, at the time of collision t = t_col, assume the player's center of mass has coordinates [xp; yp] and the enemy's center of mass has coordinates [xe; ye] (you can always tweak these to get a stronger bouncing-off effect, e.g. by making the y coordinates differ more, if you wish).
Conservation of momentum tells us that the velocities of the two objects right after the collision, call them [Vp; Wp] and [Ve; We], are calculated as follows:
[Vp; Wp] = [vp; 0] + (1/mp)*[I1; I2];
[Ve; We] = [ve; 0] - (1/me)*[I1; I2];
where, assuming as usual that the impact is normal to the surface of the objects, the vector [I1; I2] can be taken to be aligned with the vector connecting the two centers: [xp - xe; yp - ye]. Combining this information with conservation of energy, one can calculate the magnitude of said vector and find that
k = -2 * (mp*me/(mp+me)) * (vp - ve)*(xp - xe) / ((xp - xe)^2 + (yp - ye)^2);
I1 = k*(xp - xe);
I2 = k*(yp - ye);
So basically, at time of collision you have as input:
the position and velocity of the player: [xp; yp], [vp; 0]
the position and velocity of the enemy: [xe; ye], [ve; 0]
the mass of the player mp and the mass of the enemy me
Then calculate
k = -2 * (mp*me/(mp+me)) * (vp - ve)*(xp - xe) / ((xp - xe)^2 + (yp - ye)^2);
I1 = k*(xp - xe);
I2 = k*(yp - ye);
Vp = vp + (1/mp)*I1;
Wp = (1/mp)*abs(I2);
Ve = ve - (1/me)*I1;
We = (1/me)*abs(I2);
Observe that I used abs(I2), the absolute value of I2. This is because for one of the two objects the y-component of the velocity after the collision will be positive (so no difference there), but for the other it will be negative. For the negative one, we can also account for the fact that the object may bounce off the ground immediately after the collision (a collision with the object followed by a collision with the ground), so we use the reflection law, much like the way light is reflected by a mirror.
After the collision, i.e. for t >= t_col, the parabolic trajectories of the two bodies (before they land back on the ground) will be
xp(t) = xp + Vp * (t - t_col);
yp(t) = yp + Wp * (t - t_col) - (g/2) * (t - t_col)^2;
xe(t) = xe + Ve * (t - t_col);
ye(t) = ye + We * (t - t_col) - (g/2) * (t - t_col)^2;
If you want angles:
cos(angle_p) = Vp / sqrt(Vp^2 + Wp^2);
sin(angle_p) = Wp / sqrt(Vp^2 + Wp^2);
cos(angle_e) = Ve / sqrt(Ve^2 + We^2);
sin(angle_e) = We / sqrt(Ve^2 + We^2);
where angle_p is the angle of the player and angle_e is the angle of the enemy.
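A minimal Kotlin sketch of this calculation (the names here are illustrative; plug in whatever your Player and Enemy classes actually expose):

import kotlin.math.abs

// Post-collision velocities from conservation of momentum and energy.
// (xp, yp), (xe, ye) are the centers, vp and ve the horizontal speeds, mp and me the masses.
data class Velocities(val vpX: Float, val vpY: Float, val veX: Float, val veY: Float)

fun resolveCollision(
    xp: Float, yp: Float, vp: Float, mp: Float,
    xe: Float, ye: Float, ve: Float, me: Float
): Velocities {
    val dx = xp - xe
    val dy = yp - ye
    val k = -2f * (mp * me / (mp + me)) * (vp - ve) * dx / (dx * dx + dy * dy)
    val i1 = k * dx
    val i2 = k * dy
    // The player receives +impulse, the enemy -impulse; abs() on the y component as explained above.
    return Velocities(
        vpX = vp + i1 / mp,
        vpY = abs(i2) / mp,
        veX = ve - i1 / me,
        veY = abs(i2) / me
    )
}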
I'm using ARCore to build my Android app, where I allow users to place anchors. I need to be able to check whether an Anchor is in the current frame.
Any idea how I can do that?
Thanks!
I created a method based on camera.worldToScreenPoint(worldPosition), so I can check whether a position is visible:
fun com.google.ar.sceneform.Camera.isWorldPositionVisible(worldPosition: Vector3): Boolean {
    // Combine projection and view into a single view-projection matrix
    val var2 = com.google.ar.sceneform.math.Matrix()
    com.google.ar.sceneform.math.Matrix.multiply(projectionMatrix, viewMatrix, var2)

    val var5: Float = worldPosition.x
    val var6: Float = worldPosition.y
    val var7: Float = worldPosition.z

    // Clip-space w; if it is negative, the point is behind the camera
    val var8 = var5 * var2.data[3] + var6 * var2.data[7] + var7 * var2.data[11] + 1.0f * var2.data[15]
    if (var8 < 0f) {
        return false
    }

    // Normalized device coordinate x must lie in [-1, 1]
    val var9 = Vector3()
    var9.x = var5 * var2.data[0] + var6 * var2.data[4] + var7 * var2.data[8] + 1.0f * var2.data[12]
    var9.x = var9.x / var8
    if (var9.x !in -1f..1f) {
        return false
    }

    // Normalized device coordinate y must lie in [-1, 1]
    var9.y = var5 * var2.data[1] + var6 * var2.data[5] + var7 * var2.data[9] + 1.0f * var2.data[13]
    var9.y = var9.y / var8
    return var9.y in -1f..1f
}
(And I fixed the problem that Anton Stukov said in the comments)
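Usage then looks something like this (assuming an AnchorNode called node and an ArFragment called arFragment, as in the next answer):

val visible = arFragment.arSceneView.scene.camera
    .isWorldPositionVisible(node.worldPosition)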
There is a quite simple way to do this. Let's say you have an AnchorNode attached to your anchor.
First, get the node world position:
val worldPosition = node.worldPosition
Second, use scene camera to transform world position into a screen point:
val screenPoint = arFragment.arSceneView.scene.camera.worldToScreenPoint(worldPosition)
Now just check whether the point is inside screen size bounds.
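A minimal sketch of that last check (assuming the same node and arFragment names; worldToScreenPoint returns pixel coordinates):

val view = arFragment.arSceneView
val screenPoint = view.scene.camera.worldToScreenPoint(node.worldPosition)
// Note: a point behind the camera can still land inside these bounds,
// so combine this with a behind-camera check if needed (see the answer above).
val onScreen = screenPoint.x in 0f..view.width.toFloat() &&
        screenPoint.y in 0f..view.height.toFloat()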
If you're using ARCore, the renderer is probably already doing frustum culling: objects that aren't within the viewable space aren't rendered, an optimization that avoids making GL calls for "unviewable" elements of your scene.
If you have access to the objects after the renderer calculates this, then you can use that value.
Another way you can do this is by grabbing the Camera and getting the view and projection matrices. Then you can project the anchor coordinates onto 2D screen coordinates and check whether the calculated coordinates are outside the screen (i.e. the x/y values are greater than the screen width/height or less than zero). You'll have to account for objects that are behind the camera too (the dot product between the camera forward vector and the vector from the camera to the anchor should be positive).
https://developers.google.com/ar/reference/java/com/google/ar/core/Camera.html.
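A small sketch of that dot-product test, written here with the Sceneform math types for brevity (the function and parameter names are illustrative, not part of any API):

import com.google.ar.sceneform.math.Vector3

// True if the anchor lies in front of the camera (positive dot product with the camera forward vector).
fun isInFrontOfCamera(cameraPosition: Vector3, cameraForward: Vector3, anchorPosition: Vector3): Boolean {
    val cameraToAnchor = Vector3.subtract(anchorPosition, cameraPosition)
    return Vector3.dot(cameraForward, cameraToAnchor) > 0f
}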
Hello, I am working on an Android app which uses OpenCV to detect rectangles/squares. To detect them I am using functions (modified a bit) from squares.cpp. The points of every square found are stored in vector<vector<Point>> squares, which I then pass to a function that chooses the biggest one and stores it in vector<Point> theBiggestSq. The problem is with the cropping function, whose code I will paste below (I will also post a link to a YouTube video showing the problem). If the actual square is far enough from the camera it works OK, but if I bring it a bit closer, at some point it will hang. I will post a screenshot of the problem from LogCat, with the points printed out (the boundary points taken from the theBiggestSq vector; maybe it will help to find the solution).
void cutAndSave(vector<Point> theBiggestSq, Mat image){
    RotatedRect box = minAreaRect(Mat(theBiggestSq));

    // Draw bounding box in the original image (debug purposes)
    //cv::Point2f vertices[4];
    //box.points(vertices);
    //for (int i = 0; i < 4; ++i)
    //{
    //    cv::line(img, vertices[i], vertices[(i + 1) % 4], cv::Scalar(0, 255, 0), 1, CV_AA);
    //}
    //cv::imshow("box", img);
    //cv::imwrite("box.png", img);

    // Set Region of Interest to the area defined by the box
    Rect roi;
    roi.x = box.center.x - (box.size.width / 2);
    roi.y = box.center.y - (box.size.height / 2);
    roi.width = box.size.width;
    roi.height = box.size.height;

    // Crop the original image to the defined ROI
    //bmp = Bitmap.createBitmap(box.size.width / 2, box.size.height / 2, Bitmap.Config.ARGB_8888);
    Mat crop = image(roi);
    //Mat crop = image(Rect(roi.x, roi.y, roi.width, roi.height)).clone();
    //Utils.matToBitmap(crop.clone(), bmp);
    imwrite("/sdcard/OpenCVTest/1.png", crop);
    imshow("crop", crop);
}
video of my app and its problems
The coordinates printed are, respectively: roi.x, roi.y, roi.width, roi.height.
Another problem is that the drawn boundaries should be green, but as you can see in the video they are distorted (flexed, as if they were made of glass?).
Thank you for any help. I am new to OpenCV (I have been using it for only a month), so please be tolerant.
EDIT:
drawing code:
//draw//
for( size_t i = 0; i < squares.size(); i++ )
{
    const Point* p = &squares[i][0];
    int n = (int)squares[i].size();
    polylines(mBgra, &p, &n, 1, true, Scalar(255,255,0), 5, 10);
    //Rect rect = boundingRect(cv::Mat(squares[i]));
    //rectangle(mBgra, rect.tl(), rect.br(), cv::Scalar(0,255,0), 2, 8, 0);
}
This error basically tells you the cause: your ROI exceeds the image dimensions. This means that when you extract Rect roi from RotatedRect box, either x or y is smaller than zero, or the width/height pushes the rectangle outside the image. You should guard against this using something like:
// Propose rectangle from data
int proposedX = box.center.x - (box.size.width / 2);
int proposedY = box.center.y - (box.size.height / 2);
int proposedW = box.size.width;
int proposedH = box.size.height;

// Ensure top-left edge is within image
roi.x = proposedX < 0 ? 0 : proposedX;
roi.y = proposedY < 0 ? 0 : proposedY;

// Ensure bottom-right edge is within image
roi.width  = (roi.x - 1 + proposedW) > image.cols ?  // Will this roi exceed image?
             (image.cols - 1 - roi.x)                // YES: make roi go to image edge
             : proposedW;                            // NO: continue as proposed

// Similar for height
roi.height = (roi.y - 1 + proposedH) > image.rows ? (image.rows - 1 - roi.y) : proposedH;
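If you end up doing the crop from the Android side through the OpenCV Java/Kotlin bindings instead, the same clamping can be written compactly; a sketch (function name and structure are illustrative):

import org.opencv.core.Mat
import org.opencv.core.Rect

// Clamp a proposed ROI so it always lies inside the image before cropping.
fun safeCrop(image: Mat, proposed: Rect): Mat {
    val x = proposed.x.coerceIn(0, image.cols() - 1)
    val y = proposed.y.coerceIn(0, image.rows() - 1)
    val width = proposed.width.coerceAtMost(image.cols() - x)
    val height = proposed.height.coerceAtMost(image.rows() - y)
    return image.submat(Rect(x, y, width, height))
}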
I am trying to pick objects in the Bullet physics world, but all I seem to be able to pick is the floor/ground plane. I am using the Vuforia SDK and have altered the ImageTargets demo code. I have used the following code to project my touched screen points into the 3D world:
void projectTouchPointsForBullet(QCAR::Vec2F point, QCAR::Vec3F &lineStart, QCAR::Vec3F &lineEnd, QCAR::Matrix44F &modelViewMatrix)
{
    // Touch point in normalised device coordinates (z = -1 -> near plane)
    QCAR::Vec4F normalisedVector((2 * point.data[0] / screenWidth - 1),
                                 (2 * (screenHeight - point.data[1]) / screenHeight - 1),
                                 -1,
                                 1);

    // Invert the combined model-view-projection matrix
    QCAR::Matrix44F modelViewProjection;
    SampleUtils::multiplyMatrix(&projectionMatrix.data[0], &modelViewMatrix.data[0], &modelViewProjection.data[0]);
    QCAR::Matrix44F inversedMatrix = SampleMath::Matrix44FInverse(modelViewProjection);

    // Unproject the near point and divide by w
    QCAR::Vec4F near_point = SampleMath::Vec4FTransform(normalisedVector, inversedMatrix);
    near_point.data[3] = 1.0 / near_point.data[3];
    near_point = QCAR::Vec4F(near_point.data[0] * near_point.data[3], near_point.data[1] * near_point.data[3], near_point.data[2] * near_point.data[3], 1);

    // Unproject the far point (z coordinate now 1 -> far plane) and divide by w
    normalisedVector.data[2] = 1.0;
    QCAR::Vec4F far_point = SampleMath::Vec4FTransform(normalisedVector, inversedMatrix);
    far_point.data[3] = 1.0 / far_point.data[3];
    far_point = QCAR::Vec4F(far_point.data[0] * far_point.data[3], far_point.data[1] * far_point.data[3], far_point.data[2] * far_point.data[3], 1);

    lineStart = QCAR::Vec3F(near_point.data[0], near_point.data[1], near_point.data[2]);
    lineEnd = QCAR::Vec3F(far_point.data[0], far_point.data[1], far_point.data[2]);
}
When I try a ray test in my physics world, I only seem to hit the ground plane. Here is the code for the ray test call:
QCAR::Vec3F intersection, lineStart;
projectTouchPointsForBullet(QCAR::Vec2F(touch1.tapX, touch1.tapY), lineStart, lineEnd, inverseProjMatrix, modelViewMatrix);

btVector3 btRayFrom = btVector3(lineEnd.data[0], lineEnd.data[1], lineEnd.data[2]);
btVector3 btRayTo = btVector3(lineStart.data[0], lineStart.data[1], lineStart.data[2]);

btCollisionWorld::ClosestRayResultCallback rayCallback(btRayFrom, btRayTo);
dynamicsWorld->rayTest(btRayFrom, btRayTo, rayCallback);

if (rayCallback.hasHit())
{
    // My bodies have char* messages attached to them to determine what has been touched
    char* pPhysicsData = reinterpret_cast<char*>(rayCallback.m_collisionObject->getUserPointer());
    btRigidBody* pBody = btRigidBody::upcast(rayCallback.m_collisionObject);
    if (pBody && pPhysicsData)
    {
        LOG("handleTouches:: notifyOnTouchEvent from physics world!!!");
        notifyOnTouchEvent(env, obj, 0, 0, pPhysicsData);
    }
}
I know I am predominantly looking top-down, so I am bound to hit the ground plane, and at least that tells me my touch is being correctly projected into the world, but I have objects lying on the ground plane and I can't seem to touch them! Any pointers would be greatly appreciated :)
I found out why I wasn't able to touch the objects: I am scaling the objects up when they are drawn, so I had to scale the model-view matrix by the same value before projecting my touch point into the 3D world. (EDIT: I also had the btRayFrom and btRayTo input coordinates reversed; that is now fixed.)
//top of code
float kObjectScale = 100.0f;
....
...
//inside touch handler method
SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale, &modelViewMatrix.data[0]);
projectTouchPointsForBullet(QCAR::Vec2F(touch1.tapX, touch1.tapY), lineStart, lineEnd, inverseProjMatrix, modelViewMatrix);
btVector3 btRayFrom = btVector3(lineStart.data[0], lineStart.data[1], lineStart.data[2]);
btVector3 btRayTo = btVector3(lineEnd.data[0], lineEnd.data[1], lineEnd.data[2]);
My touches are projected correctly now :)