Off-center camera projection for 3D illusion using LibGDX - Android

I'm trying to achieve a 3D pop-out-of-the-screen effect using LibGDX on Android (the camera process described at the link below):
https://www.anxious-bored.com/blog/
but the result I get is stretched objects when the eye moves around.
I simulate the eye position using a gyroscope to get the device rotation and assume a fixed distance from the device screen. I've also incorporated ARCore for eye tracking, but that yields the same result.
Here is a GIF of what I'm seeing:
https://i.stack.imgur.com/YOf6n.gif
Anyone have any idea on what I'm doing wrong?
Here is the relevant code I'm using in Kotlin.
private fun updateCamera() {
    val camera = // The Perspective camera instance
    // Move the camera to the eye position, but keep the same camera direction
    camera.view.setToLookAt(camera.direction, camera.up)
        .mul(GDXHelperInstances.matrix4_1.setToTranslation(GDXHelperInstances.vector3_1.set(camera.position).scl(-1f)))
    // Get the size of the screen in meters
    val deviceHalfHeight = camera.viewportHeight / Gdx.graphics.ppcY * 0.01f
    val deviceHalfWidth = camera.viewportWidth / Gdx.graphics.ppcX * 0.01f
    // Get the device position and plane relative to the camera
    val pos = Vector3(Vector3.Zero).mul(camera.view)
    val plane = Plane(GDXHelperInstances.vector3_1.set(camera.direction).scl(-1f), Vector3.Zero)
    // Calculate the bounds of the viewport in virtual space (the device screen dimensions in meters
    // with center at Vector3.Zero) relative to the camera (view space)
    val left = pos.x - deviceHalfWidth
    val right = pos.x + deviceHalfWidth
    val bottom = pos.y - deviceHalfHeight
    val top = pos.y + deviceHalfHeight
    val nearScale = camera.near / plane.distance(camera.position).absoluteValue
    // Off-axis projection
    camera.projection.setToProjection(left * nearScale, right * nearScale, bottom * nearScale, top * nearScale, camera.near, camera.far)
    // Calculate new asymmetrical frustum
    camera.combined.set(camera.projection)
    Matrix4.mul(camera.combined.`val`, camera.view.`val`)
    camera.invProjectionView.set(camera.combined)
    Matrix4.inv(camera.invProjectionView.`val`)
    camera.frustum.update(camera.invProjectionView)
}
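For reference, here is how I understand the textbook off-axis frustum from the linked post, for a screen centred at the origin in the x/y plane. This is just a sketch with illustrative names (eye is the tracked eye position in screen-centred metres), not my actual code:

// Illustrative only: off-axis frustum for a screen centred at the origin,
// with the eye at `eye` (metres, eye.z > 0 in front of the screen).
val near = camera.near
val scale = near / eye.z
val left = (-deviceHalfWidth - eye.x) * scale
val right = (deviceHalfWidth - eye.x) * scale
val bottom = (-deviceHalfHeight - eye.y) * scale
val top = (deviceHalfHeight - eye.y) * scale
camera.projection.setToProjection(left, right, bottom, top, near, camera.far)
// The view transform just moves the world so the eye sits at the origin, still facing the screen.
camera.view.setToTranslation(-eye.x, -eye.y, -eye.z)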

Related

Here Map zoom with map padding

Currently I'm using version 4.12.x of the HERE map SDK (Navigate edition). I want to display a route from point A to point B, or a bounding box, with some padding. I can do the animation, but in landscape I have a "loading" screen that takes up almost half of the screen, so the location and the waypoint/coordinate might not be visible. For example, if point A is in the south and point B is in the north and the distance is far enough, I can't see the waypoints.
Currently I'm trying this:
val orientation = GeoOrientationUpdate(null, 0.0)
val sizeInPixels = Size2D(
    binding.hereMapsMapView.width.toDouble() - 100,
    binding.hereMapsMapView.height.toDouble() - 100
)
val origin = Point2D(1.0, 1.0)
val mapViewport = Rectangle2D(origin, sizeInPixels)
val cameraUpdate = MapCameraUpdateFactory.lookAt(geoBox, orientation, mapViewport)
val animation = MapCameraAnimationFactory.createAnimation(
    cameraUpdate,
    Duration.ofMillis(CAMERA_ANIMATION_TIME_IN_MILLIS),
    EasingFunction.IN_OUT_SINE
)
binding.hereMapsMapView.camera.startAnimation(animation) {
    if (it == AnimationState.COMPLETED || it == AnimationState.CANCELLED) {
        // complete
    }
}
I checked the documentation but couldn't find much info. Let's say my screen is 2044 px high and my "loading view" is 1200 px high.
I tried this:
val sizeInPixels = Size2D(
    binding.hereMapsMapView.width.toDouble() - 100,
    binding.hereMapsMapView.height.toDouble() + 1200
)
But it does not work; something around 700 px seems to work, and even then not all of the route is displayed. Is there any way to fix this?
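What I'm essentially trying to express is to restrict the camera to the part of the map view that isn't covered, something like the rough sketch below (not verified against the 4.12 SDK; loadingViewPx is a hypothetical value for how many pixels the loading view covers on one edge of the map):

// Rough sketch: feed lookAt only the unobscured part of the map view.
val padding = 100.0
val visibleOrigin = Point2D(loadingViewPx + padding / 2, padding / 2)
val visibleSize = Size2D(
    binding.hereMapsMapView.width.toDouble() - loadingViewPx - padding,
    binding.hereMapsMapView.height.toDouble() - padding
)
val mapViewport = Rectangle2D(visibleOrigin, visibleSize)
val cameraUpdate = MapCameraUpdateFactory.lookAt(geoBox, orientation, mapViewport)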

Azimuth mirrored after remapping axis coordinate system

I am trying to remap a device that has an alternate coordinate system.
The sensor is reporting values that are rotated 90° around the X axis. The format is a Quaternion in standard Android Rotation Vector notation. If I use the data unmodified I can hold the device 90° offset and successfully call getOrientation via:
private void updateOrientationFromVector(float[] rotationVector) {
    float[] rotationMatrix = new float[9];
    SensorManager.getRotationMatrixFromVector(rotationMatrix, rotationVector);
    final int worldAxisForDeviceAxisX = SensorManager.AXIS_X;
    final int worldAxisForDeviceAxisY = SensorManager.AXIS_Z;
    float[] adjustedRotationMatrix = new float[9];
    SensorManager.remapCoordinateSystem(rotationMatrix, worldAxisForDeviceAxisX,
            worldAxisForDeviceAxisY, adjustedRotationMatrix);
    // Transform rotation matrix into azimuth/pitch/roll
    float[] orientation = new float[3];
    SensorManager.getOrientation(adjustedRotationMatrix, orientation);
    // Convert radians to degrees
    float azimuth = orientation[0] * 57;
    float pitch = orientation[1] * 57;
    float roll = orientation[2] * 57;
    // Normalize for readability
    if (azimuth < 0) {
        azimuth = azimuth + 360;
    }
    Log.d("Orientation", "Azimuth: " + azimuth + "° Pitch: " + pitch + "° Roll: " + roll + "°");
}
This code works fine for all normal Android devices. If I hold a reference phone in front of me as shown, the data is converted properly and shows my correct bearings. But when I use this test device, I must hold it at the wrong orientation to show me the correct bearings.
I want to pre-process the data from this test device to rotate the axes so that this device matches all other Android devices. This keeps the display logic generic.
Unfortunately I have tried many different techniques and none are working.
First, I tried to use the Android calls again:
private fun rotateQuaternionAxes(rotationVector: FloatArray): FloatArray {
    val rotationMatrix = FloatArray(9)
    SensorManager.getRotationMatrixFromVector(rotationMatrix, rotationVector)
    val worldAxisForDeviceAxisX = SensorManager.AXIS_X
    val worldAxisForDeviceAxisY = SensorManager.AXIS_Z
    val adjustedRotationMatrix = FloatArray(9)
    SensorManager.remapCoordinateSystem(rotationMatrix, worldAxisForDeviceAxisX, worldAxisForDeviceAxisY, adjustedRotationMatrix)
    val axisRemappedData = Quaternion.fromRotationMatrix(adjustedRotationMatrix)
    val rotationData = floatArrayOf(
        axisRemappedData.x,
        axisRemappedData.y,
        axisRemappedData.z,
        axisRemappedData.w
    )
    return rotationData
}
My private Quaternion.fromRotationMatrix is not shown here; it came from euclideanspace.com.
When I pre-process my rotation data with this, the logic works for everything, except north and south are swapped! East and west are correct, and my pitch and roll are correct.
So I decided to follow the suggestions for Rotating a Quaternion on 1-Axis with the following code:
private fun rotateQuaternionAxes(rotationVector: FloatArray): FloatArray {
    // https://stackoverflow.com/questions/4436764/rotating-a-quaternion-on-1-axis
    // Device X+ is towards power button; Y+ is toward camera; Z+ towards nav buttons
    // So rotate the reported data 90 degrees around X and the axes move appropriately
    val sensorQuaternion = Quaternion(rotationVector[0], rotationVector[1], rotationVector[2], rotationVector[3])
    val manipulationQuaternion = Quaternion.axisAngle(-1.0f, 0.0f, 0.0f, 90.0f)
    val axisRemappedData = Quaternion.multiply(sensorQuaternion, manipulationQuaternion)
    val rotationData = floatArrayOf(
        axisRemappedData.x,
        axisRemappedData.y,
        axisRemappedData.z,
        axisRemappedData.w
    )
    // LogUtil.debug("Orientation Orig: $sensorQuaternion Rotated: $axisRemappedData")
    return rotationData
}
This does the exact same thing! Everything is fine, except north and south are mirrored, leaving east and west correct.
My Quaternion math came from sceneform-android-sdk and I double-checked it against several online sources.
I also tried simply changing my data by just grabbing the same data differently according to Convert quaternion to a different coordinate system.
private fun rotateQuaternionAxes(rotationVector: FloatArray): FloatArray {
    // No change:
    // val rotationData = floatArrayOf(x_val, y_val, z_val, w_val)
    val x_val = rotationVector[0]
    val y_val = rotationVector[1]
    val z_val = rotationVector[2]
    val w_val = rotationVector[3]
    val rotationData = floatArrayOf(x_val, z_val, -y_val, w_val)
    return rotationData
}
This was not even close. I played with the axes and ended up finding that rotationData = floatArrayOf(-z_val, -x_val, y_val, w_val) had correct pitch and roll, but the azimuth was completely non-functional. So I've abandoned a simple remapping as an option.
Since the Android remapCoordinateSystem and the quaternion math give the same result, they seem mathematically equivalent. And multiple sources indicate they should accomplish what I'm trying to do.
Can anyone explain why remapping my axes would swap north/south? I believe I am getting a quaternion reflection instead of a rotation. There is no physical point on the device that tracks the direction it shows.
Answer
As you said, it looks like you are expecting your data to be on the East-North-Up (ENU) Frame of Reference (FoR) but you are seeing data on an East-Down-North (EDN) FoR. The link you cited to convert quaternion to another coordinate system converts from an ENU to a NDW FoR - which evidently is not what you are looking for.
There are two ways you can solve this. Either use another rotation matrix, or swap your variables. Using another rotation matrix means doing more computation - but if you really want to learn how to do this, you can check out my self-plug introduction to quaternions for reference frame rotations.
The easiest way would be to swap your variables by recognizing that your X axis is not changing, but your expected Y is measured in z' and your expected Z is measured in -y', where X, Y, Z are the expected FoR and x', y', z' are the actual measured FoR. The following "swaps" should allow you to get the same behavior as your other Android devices:
x_expected = x_actual
y_expected = z_actual
z_expected = -y_actual
!!! HOWEVER !!! If your measurements are given in quaternions, then you will have to use a rotation matrix. If your measurements are given as X,Y,Z measurements, you can get away with the swap provided above.
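For example, applied to a raw (x, y, z) vector measurement, the swap above is just the following (a minimal Kotlin sketch, names illustrative):

// Remap an (x, y, z) measurement from the sensor's frame into the expected frame,
// per the swaps above: x stays, expected y = measured z, expected z = -measured y.
fun remapAxes(measured: FloatArray): FloatArray =
    floatArrayOf(measured[0], measured[2], -measured[1])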
ENU/NED/NDW Notation
East-North-Up and all other similar axes notations are defined by the order of the coordinate system, expressed as X, then Y, and lastly Z, with respect to a Global inertial (static) Frame of Reference. I've defined your expected coordinate system as if you were to lay your phone flat on the ground with the screen of the phone facing the sky and the top of your phone pointing Northward.

Reduce tracking window using google mlkit vision samples

I would like to reduce the barcode tracking window when using the Google ML Kit vision API. There are some answers here, but they feel a bit outdated.
I'm using google's sample: https://github.com/googlesamples/mlkit/tree/master/android/vision-quickstart
Currently, I try to figure out whether a barcode is inside my overlay box in the BarcodeScannerProcessor onSuccess callback:
override fun onSuccess(barcodes: List<Barcode>, graphicOverlay: GraphicOverlay) {
    if (barcodes.isEmpty())
        return
    for (barcode in barcodes) {
        val center = Point(graphicOverlay.imageWidth / 2, graphicOverlay.imageHeight / 2)
        val rectWidth = graphicOverlay.imageWidth * Settings.OverlayWidthFactor
        val rectHeight = graphicOverlay.imageHeight * Settings.OverlayHeightFactor
        val left = center.x - rectWidth / 2
        val top = center.y - rectHeight / 2
        val right = center.x + rectWidth / 2
        val bottom = center.y + rectHeight / 2
        val rect = Rect(left.toInt(), top.toInt(), right.toInt(), bottom.toInt())
        val contains = rect.contains(barcode.boundingBox!!)
        val color = if (contains) Color.GREEN else Color.RED
        graphicOverlay.add(BarcodeGraphic(graphicOverlay, barcode, "left: ${barcode.boundingBox!!.left}", color))
    }
}
Y-wise it works perfectly, but the X values from barcode.boundingBox, e.g. barcode.boundingBox.left, seem to have an offset. Is it based on what's being calculated in GraphicOverlay?
I'm expecting the value below to be close to 0, but the offset is about 90 here:
Or perhaps it's more efficient to crop the image according to the box?
Actually the bounding box is correct. The trick is that the image aspect ratio doesn't match the viewport aspect ratio so the image is cropped horizontally. Try to open settings (a gear in the top right corner) and choose an appropriate resolution.
For example, take a look at these two screenshots. In the first one the selected resolution (1080x1920) matches my phone's resolution, so the padding looks good (17 px). In the second one the aspect ratio is different (1.0 for the 720x720 resolution), therefore the image is cropped and the padding looks incorrect.
So the offset should be transformed from image coordinates to the screen coordinates. Under the hood GraphicOverlay uses a matrix for this transformation. You can use the same matrix:
for (barcode in barcodes) {
    barcode.boundingBox?.let { bbox ->
        val offset = floatArrayOf(bbox.left.toFloat(), bbox.top.toFloat())
        graphicOverlay.transformationMatrix.mapPoints(offset)
        val leftOffset = offset[0]
        val topOffset = offset[1]
        ...
    }
}
The only thing is that the transformationMatrix is private, so you should add a getter to access it.
As you know, the camera's preview size is configurable in the settings menu. This configured size determines the graphicOverlay dimensions.
On the other hand, the aspect ratio of the CameraSourcePreview (i.e. preview_view in activity_vision_live_preview.xml) shown on the screen does not necessarily equal the ratio of the graphicOverlay, because it depends on the size of the phone's screen and the height that the parent ConstraintLayout allows it to occupy.
So, depending on the difference between the aspect ratios of graphicOverlay and preview_view, some part of the graphicOverlay might not be shown horizontally or vertically.
There are some parameters inside GraphicOverlay that can help us to adjust the left and top of the barcode's boundingBox in such a way that they start from 0 in the visible area.
First of all, they should be accessible from outside the GraphicOverlay class, so it's enough to write getter methods for them:
GraphicOverlay.java
public class GraphicOverlay extends View {
    ...

    /**
     * The factor of overlay View size to image size. Anything in the image coordinates need to be
     * scaled by this amount to fit with the area of overlay View.
     */
    public float getScaleFactor() {
        return scaleFactor;
    }

    /**
     * The number of vertical pixels needed to be cropped on each side to fit the image with the
     * area of overlay View after scaling.
     */
    public float getPostScaleHeightOffset() {
        return postScaleHeightOffset;
    }

    /**
     * The number of horizontal pixels needed to be cropped on each side to fit the image with the
     * area of overlay View after scaling.
     */
    public float getPostScaleWidthOffset() {
        return postScaleWidthOffset;
    }
}
Now, it is possible to calculate the left and top difference gap using these parameters like the following:
BarcodeScannerProcessor.kt
class BarcodeScannerProcessor(
    context: Context
) : VisionProcessorBase<List<Barcode>>(context) {
    ...

    override fun onSuccess(barcodes: List<Barcode>, graphicOverlay: GraphicOverlay) {
        if (barcodes.isEmpty()) {
            Log.v(MANUAL_TESTING_LOG, "No barcode has been detected")
        }
        val leftDiff = graphicOverlay.run { postScaleWidthOffset / scaleFactor }.toInt()
        val topDiff = graphicOverlay.run { postScaleHeightOffset / scaleFactor }.toInt()
        for (i in barcodes.indices) {
            val barcode = barcodes[i]
            val color = Color.RED
            val text = "left: ${barcode.boundingBox!!.left - leftDiff} top: ${barcode.boundingBox!!.top - topDiff}"
            graphicOverlay.add(MyBarcodeGraphic(graphicOverlay, barcode, text, color))
            logExtrasForTesting(barcode)
        }
    }

    ...
}
Visual Result:
Here is the visual result of the output. As the pictures show, the left and top of the barcode are now measured from 0 at the left and top of the visible area. In the left picture, the graphicOverlay is set to a size of 480x640 (aspect ratio ≈ 1.3334), and in the right one to 360x640 (aspect ratio ≈ 1.7778). In both cases, on my phone, the CameraSourcePreview has a steady size of 1440x2056 pixels (aspect ratio ≈ 1.4278), so the calculation truly reflects the position of the barcode in the visible area.
(Note that the aspect ratio of the visible area is greater than that of the graphicOverlay in one experiment and lower in the other: 1.3334 < 1.4278 < 1.7778, so the left and top values are adjusted accordingly.)

Combine image with video stream on Android

I am investigating Augmented Reality on Android.
I am using ARCore and Sceneform within an Android application.
I have tried out the sample projects and now would like to develop my own application.
One effect I would like to achieve is to combine/overlay an image (say .jpeg or .png) with a live feed from the device's onboard camera.
The image will have a transparent background that allows the user to see the live feed and the image simultaneously.
However, I do not want the overlaid image to be a fixed/static watermark: when the user zooms in, out, or pans, the overlaid image must also zoom in, out, and pan, etc.
I do not wish the overlaid image to become 3D or anything of that nature.
Is this effect possible with Sceneform, or will I need to use other third-party libraries and/or tools to achieve the desired result?
UPDATE
The user is drawing on a blank sheet of white paper. The sheet of paper is oriented so that the user can draw comfortably (either left- or right-handed). The user is free to move the sheet of paper while they complete their image.
An Android device is held above the sheet of paper filming the user drawing their selected image.
The live camera feed is being cast to a large TV or monitor screen.
To aid the user they have selected a static image to "trace" or "Copy".
This image is chosen on the Android device and is being combined with the live camera stream within the Android application.
The user can zoom in and out on their drawing and the combined live stream and selected static image will also zoom in and out, this will enable the user to make an accurate copy of the selected static image by drawing "Free Hand".
When the user looks directly at the sheet of paper, they only see their drawing.
When the user views the cast live stream of them drawing on the TV or monitor they see their drawing and the chosen static image superimposed. The user can control the transparency of the static image to assist them in making an accurate copy of it.
I think what you are looking for is to use AR to display an image so that the image stays in place, for example over a sheet of paper in order to act as a guide for drawing a copy of the image on the paper.
There are 2 parts to this. First is to locate the sheet of paper, the second is to place the image over the paper and keep it there as the phone moves around.
Locating the sheet of paper can be done just by detecting the plane containing the paper (having some contrast, pattern, or similar vs. a plain white sheet of paper will help), then tapping where the center of the page should be. This is done in the HelloSceneform sample.
If you want a more accurate bounding of the paper, you could tap the 4 corners of the paper and create anchors there. To do this, register a plane-tap listener in onCreate():
arFragment.setOnTapArPlaneListener(this::onPlaneTapped);
Then in onPlaneTapped, create the 4 anchorNodes. Once you have 4, initialize the drawing to be displayed.
private void onPlaneTapped(HitResult hitResult, Plane plane, MotionEvent event) {
    if (cornerAnchors.size() != 4) {
        AnchorNode corner = createCornerNode(hitResult.createAnchor());
        arFragment.getArSceneView().getScene().addChild(corner);
        cornerAnchors.add(corner);
    }
    if (cornerAnchors.size() == 4 && drawingNode == null) {
        initializeDrawing();
    }
}
To initialize the drawing, create a Sceneform Texture from the bitmap or drawable. This can be from a resource or a file URL. You want the texture to show the whole image, and scale as the model holding it is resized.
private void initializeDrawing() {
    Texture.Sampler sampler = Texture.Sampler.builder()
            .setWrapMode(Texture.Sampler.WrapMode.CLAMP_TO_EDGE)
            .setMagFilter(Texture.Sampler.MagFilter.NEAREST)
            .setMinFilter(Texture.Sampler.MinFilter.LINEAR_MIPMAP_LINEAR)
            .build();
    Texture.builder()
            .setSource(this, R.drawable.logo_google_developers)
            .setSampler(sampler)
            .build()
            .thenAccept(texture -> {
                MaterialFactory.makeTransparentWithTexture(this, texture)
                        .thenAccept(this::buildDrawingRenderable);
            });
}
The model to hold the texture is just a flat quad sized to the smallest dimension between the corners. This is the same logic as laying out a quad using OpenGL.
private void buildDrawingRenderable(Material material) {
    Integer[] indices = {
            0, 1, 3, 3, 1, 2
    };

    // Calculate the center of the corners.
    float min_x = Float.MAX_VALUE;
    float max_x = Float.MIN_VALUE;
    float min_z = Float.MAX_VALUE;
    float max_z = Float.MIN_VALUE;
    for (AnchorNode node : cornerAnchors) {
        float x = node.getWorldPosition().x;
        float z = node.getWorldPosition().z;
        min_x = Float.min(min_x, x);
        max_x = Float.max(max_x, x);
        min_z = Float.min(min_z, z);
        max_z = Float.max(max_z, z);
    }
    float width = Math.abs(max_x - min_x);
    float height = Math.abs(max_z - min_z);
    float extent = Math.min(width / 2, height / 2);

    Vertex[] vertices = {
            Vertex.builder()
                    .setPosition(new Vector3(-extent, 0, extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(0, 1)) // top left
                    .build(),
            Vertex.builder()
                    .setPosition(new Vector3(extent, 0, extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(1, 1)) // top right
                    .build(),
            Vertex.builder()
                    .setPosition(new Vector3(extent, 0, -extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(1, 0)) // bottom right
                    .build(),
            Vertex.builder()
                    .setPosition(new Vector3(-extent, 0, -extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(0, 0)) // bottom left
                    .build()
    };

    RenderableDefinition.Submesh[] submeshes = {
            RenderableDefinition.Submesh.builder()
                    .setMaterial(material)
                    .setTriangleIndices(Arrays.asList(indices))
                    .build()
    };
    RenderableDefinition def = RenderableDefinition.builder()
            .setSubmeshes(Arrays.asList(submeshes))
            .setVertices(Arrays.asList(vertices))
            .build();
    ModelRenderable.builder()
            .setSource(def)
            .setRegistryId("drawing")
            .build()
            .thenAccept(this::positionDrawing);
}
The last part is to position the quad in the center of the corners, and create a Transformable node so the image can be nudged into position, rotated, or scaled to be the perfect size.
private void positionDrawing(ModelRenderable drawingRenderable) {
    // Calculate the center of the corners.
    float min_x = Float.MAX_VALUE;
    float max_x = Float.MIN_VALUE;
    float min_z = Float.MAX_VALUE;
    float max_z = Float.MIN_VALUE;
    for (AnchorNode node : cornerAnchors) {
        float x = node.getWorldPosition().x;
        float z = node.getWorldPosition().z;
        min_x = Float.min(min_x, x);
        max_x = Float.max(max_x, x);
        min_z = Float.min(min_z, z);
        max_z = Float.max(max_z, z);
    }
    Vector3 center = new Vector3((min_x + max_x) / 2f,
            cornerAnchors.get(0).getWorldPosition().y, (min_z + max_z) / 2f);

    Anchor centerAnchor = null;
    Vector3 screenPt = arFragment.getArSceneView().getScene().getCamera().worldToScreenPoint(center);
    List<HitResult> hits = arFragment.getArSceneView().getArFrame().hitTest(screenPt.x, screenPt.y);
    for (HitResult hit : hits) {
        if (hit.getTrackable() instanceof Plane) {
            centerAnchor = hit.createAnchor();
            break;
        }
    }

    AnchorNode centerNode = new AnchorNode(centerAnchor);
    centerNode.setParent(arFragment.getArSceneView().getScene());

    drawingNode = new TransformableNode(arFragment.getTransformationSystem());
    drawingNode.setParent(centerNode);
    drawingNode.setRenderable(drawingRenderable);
}
The intended AR reference image can be scaled using AR objects as points for sizing the template for the user.
More complex AR images will not work easily, since the AR image is overlaid on top of the user's tracing, and this will obstruct the tip of their pen/pencil.
My solution is to chromakey the white paper. This would replace the white paper with the chosen image or live feed. Moving the paper around as you specified would be an issue, unless you have a means of tracking the paper's position.
As you can see in this example, AR objects are in front, while the chromakey is in the background. The tracing surface (paper) would be in the middle.
Reference to this example is on the link below.
RJ
YouTube - AR tracked environment

Figuring out if Anchor is visible in current screen

I'm using ARCore to build my Android app, where I allow users to place anchors. I need to be able to check whether the Anchor is in the current frame.
Any idea how can I do it?
Thanks!
I created a method based on camera.worldToScreenPoint(worldPosition), so I can check whether a position is visible:
fun com.google.ar.sceneform.Camera.isWorldPositionVisible(worldPosition: Vector3): Boolean {
    // Build the combined view-projection matrix
    val var2 = com.google.ar.sceneform.math.Matrix()
    com.google.ar.sceneform.math.Matrix.multiply(projectionMatrix, viewMatrix, var2)

    val var5: Float = worldPosition.x
    val var6: Float = worldPosition.y
    val var7: Float = worldPosition.z

    // Clip-space w component; negative means the point is behind the camera
    val var8 = var5 * var2.data[3] + var6 * var2.data[7] + var7 * var2.data[11] + 1.0f * var2.data[15]
    if (var8 < 0f) {
        return false
    }

    // Normalized device coordinates must fall inside [-1, 1] to be on screen
    val var9 = Vector3()
    var9.x = var5 * var2.data[0] + var6 * var2.data[4] + var7 * var2.data[8] + 1.0f * var2.data[12]
    var9.x = var9.x / var8
    if (var9.x !in -1f..1f) {
        return false
    }

    var9.y = var5 * var2.data[1] + var6 * var2.data[5] + var7 * var2.data[9] + 1.0f * var2.data[13]
    var9.y = var9.y / var8
    return var9.y in -1f..1f
}
(And I fixed the problem that Anton Stukov said in the comments)
There is a quite simple way to do this. Let's say you have an AnchorNode attached to your anchor.
First, get the node world position:
val worldPosition = node.worldPosition
Second, use scene camera to transform world position into a screen point:
val screenPoint = arFragment.arSceneView.scene.camera.worldToScreenPoint(worldPosition)
Now just check whether the point is inside screen size bounds.
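Something like this sketch, with the screen bounds taken from the ArSceneView (variable names assumed):

// Sketch: the node is on screen if its projected point falls inside the view bounds.
// Note this alone doesn't exclude points behind the camera.
val view = arFragment.arSceneView
val screenPoint = view.scene.camera.worldToScreenPoint(node.worldPosition)
val isVisible = screenPoint.x in 0f..view.width.toFloat() &&
        screenPoint.y in 0f..view.height.toFloat()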
If you're using ARCore, you're probably doing frustum culling where you don't render objects that aren't within the viewable space, an optimization used to stop you from making gl calls to render "unviewable" elements of your scene.
If you have access to the objects after the renderer calculates this, then you can use that value.
Another way you can do this is by grabbing the Camera and getting the view and projection matrices. Then you can project the anchor coordinates onto 2D screen coordinates and check whether the calculated coordinates are outside the screen (i.e. the x/y values are less than 0 or greater than the screen width/height). You'll have to account for objects that are behind the camera too (the dot product between the camera forward vector and the vector from the camera to the anchor should be positive).
https://developers.google.com/ar/reference/java/com/google/ar/core/Camera.html.
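A rough sketch of that behind-the-camera check using Sceneform's math types (assuming camera is the scene camera and anchorNode is the node attached to the anchor):

// Sketch: the anchor is in front of the camera when the vector from the camera to the
// anchor points roughly the same way as the camera's forward vector.
val toAnchor = Vector3.subtract(anchorNode.worldPosition, camera.worldPosition)
val inFront = Vector3.dot(camera.forward, toAnchor) > 0f
// Combine this with the projected x/y screen-bounds check described above.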
