I need to show different text for each ViewRenderable view. For example, fl_measurement holds 10, 12, 15, but all views show 15, 15, 15; the last value replaces the text in every row. I need the text views to show 10, 12, 15. Please help me fix this issue.
// This is my ViewRenderable
ViewRenderable.builder()
        .setView(this, R.layout.measure_view)
        .build()
        .thenAccept(renderable -> measureViewRenderable = renderable)
        .exceptionally(throwable -> {
            Toast toast =
                    Toast.makeText(this, "Unable to load andy renderable", Toast.LENGTH_LONG);
            toast.setGravity(Gravity.CENTER, 0, 0);
            toast.show();
            return null;
        });
// I show the text message above the line using the ViewRenderable view
private void addLineBetweenHits(HitResult hitResult, Plane plane, MotionEvent motionEvent) {
    Anchor anchor = hitResult.createAnchor();
    AnchorNode anchorNode = new AnchorNode(anchor);
    if (myanchornode != null) {
        anchorNode.setParent(arFragment.getArSceneView().getScene());
        point1 = myanchornode.getWorldPosition();
        point2 = anchorNode.getWorldPosition();
        /*
         * First, find the vector extending between the two points and define a
         * look rotation in terms of this vector.
         */
        final Vector3 difference = Vector3.subtract(point1, point2);
        final Vector3 directionFromTopToBottom = difference.normalized();
        final Quaternion rotationFromAToB =
                Quaternion.lookRotation(directionFromTopToBottom, Vector3.up());
        node = new Node();
        MaterialFactory.makeOpaqueWithColor(getApplicationContext(), new Color(0, 255, 244))
                .thenAccept(material -> {
                    /* Then, create a rectangular prism using ShapeFactory.makeCube() and
                       use the difference vector to extend it to the necessary length. */
                    ModelRenderable model = ShapeFactory.makeCube(
                            new Vector3(.01f, .01f, difference.length()),
                            Vector3.zero(), material);
                    /* Last, set the world rotation of the node to the rotation calculated
                       earlier, and set the world position to the midpoint between the points. */
                    node.setParent(anchorNode);
                    node.setRenderable(model);
                    node.setWorldPosition(Vector3.add(point1, point2).scaled(.5f));
                    node.setWorldRotation(rotationFromAToB);
                });
        myanchornode = anchorNode;
        TransformableNode andy = new TransformableNode(arFragment.getTransformationSystem());
        andy.setParent(anchorNode);
        andy.setRenderable(measureViewRenderable);
        TextView view = (TextView) measureViewRenderable.getView().findViewById(R.id.txtMeasure);
        view.setText("" + fl_measurement.get(fl_measurement.size() - 1));
        andy.select();
        andy.getScaleController().setEnabled(false);
    }
}
I was investigating this same topic; right now it seems we need to create a separate ViewRenderable for each node, by calling ViewRenderable.builder() again each time, so that each node has its own view and therefore its own data.
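A minimal sketch of that approach, reusing the activity context, `arFragment`, and layout from the question (the helper name `addMeasureLabel` is illustrative, not from the original code):

```java
// Sketch: build a fresh ViewRenderable per label so each node gets its own
// view instance, instead of all nodes sharing measureViewRenderable.
private void addMeasureLabel(AnchorNode anchorNode, float measurement) {
    ViewRenderable.builder()
            .setView(this, R.layout.measure_view)
            .build()
            .thenAccept(renderable -> {
                // Each renderable has its own view, so setText here only
                // affects this one label.
                TextView view = (TextView) renderable.getView().findViewById(R.id.txtMeasure);
                view.setText(String.valueOf(measurement));
                TransformableNode labelNode =
                        new TransformableNode(arFragment.getTransformationSystem());
                labelNode.setParent(anchorNode);
                labelNode.setRenderable(renderable);
                labelNode.getScaleController().setEnabled(false);
            });
}
```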
Related
I am placing a 2D image on a vertical wall, like a painting hanging on the wall, but the rendered image is rotated by a random angle; it is not properly aligned. I have integrated the following code:
@RequiresApi(api = Build.VERSION_CODES.N)
private void createViewRenderable(WeakReference<ARRoomViewActivity> weakActivity) {
    imageViewRenderable = new ImageView(this);
    Picasso.get().load(paintingImageUrl)
            .networkPolicy(NetworkPolicy.OFFLINE)
            .into(imageViewRenderable);
    ViewRenderable.builder()
            .setView(this, imageViewRenderable)
            .build()
            .thenAccept(renderable -> {
                ARRoomViewActivity activity = weakActivity.get();
                if (activity != null) {
                    activity.renderable = renderable;
                }
            })
            .exceptionally(throwable -> {
                return null;
            });
}
private void addToScene(HitResult hitResult) {
    if (renderable == null) {
        return;
    }
    Anchor anchor = hitResult.createAnchor();
    anchorNode = new AnchorNode(anchor);
    anchorNode.setParent(arFragment.getArSceneView().getScene());
    node = new Node();
    node.setRenderable(renderable);
    transformableNode = new TransformableNode(arFragment.getTransformationSystem());
    node.setLocalRotation(Quaternion.axisAngle(new Vector3(-1f, 0, 0), -90f));
    node.setLookDirection(new Vector3(0, 10f, 0));
    transformableNode.setParent(anchorNode);
    node.setParent(transformableNode);
    transformableNode.select();
}
I am using the following Sceneform version, and I also tried the latest 1.17.0, but could not succeed:
implementation 'com.google.ar.sceneform.ux:sceneform-ux:1.7.0'
The orientation of anchors by default is such that one of the axes points somewhat towards the camera. This results in models placed on the floor to be "looking at" the user. However, models on the wall end up rotating arbitrarily. This can be fixed by setting the look direction of the node.
In your addToScene method, you can introduce an intermediate node that fixes the orientation relative to the anchor node. Children of the intermediate node will then be oriented correctly.
// Create an anchor node that will stay attached to the wall
Anchor anchor = hitResult.createAnchor();
anchorNode = new AnchorNode(anchor);
anchorNode.setParent(arFragment.getArSceneView().getScene());
// Create an intermediate node and orient it to be level by fixing look direction.
// This is needed specifically for nodes on walls.
Node intermediateNode = new Node();
intermediateNode.setParent(anchorNode);
Vector3 anchorUp = anchorNode.getUp();
intermediateNode.setLookDirection(Vector3.up(), anchorUp);
Note: the last two lines in the code sample above are the key to getting the orientation correct.
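For completeness, the rest of addToScene can then parent the picture under the intermediate node so that children inherit the corrected orientation (a sketch reusing the `renderable` and `arFragment` fields from the question; not part of the original answer):

```java
// Parent the picture under the level intermediate node; no extra rotation
// is needed because the intermediate node is already oriented correctly
// relative to the wall anchor.
TransformableNode pictureNode = new TransformableNode(arFragment.getTransformationSystem());
pictureNode.setParent(intermediateNode);
pictureNode.setRenderable(renderable);
pictureNode.select();
```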
I have a horizontal plane, where I set the start point of the line I want to draw.
Now I want to make a vertical line from the start point to the centre of the camera, i.e. of the ArFragment.
Currently I do it as follows:
// create an anchor 1 meter away from the current camera position and rotation
Anchor anchor = arFragment.getArSceneView().getSession().createAnchor(
        arFragment.getArSceneView().getArFrame().getCamera().getPose()
                .compose(Pose.makeTranslation(0, 0, -1f))
                .extractTranslation());
// create an AnchorNode with the position data of the Anchor
AnchorNode anchorNode = new AnchorNode(anchor);
// remove the anchor so we are free to change the position of the AnchorNode
anchorNode.setAnchor(null);
// get the local position of the new anchor node
Vector3 newLocPos = anchorNode.getLocalPosition();
// get the local position of the start node
Vector3 fromVector = fromNode.getLocalPosition();
// since we want a vertical line, set x and z of the new AnchorNode to the same as the start
newLocPos.x = fromVector.x;
newLocPos.z = fromVector.z;
// set the updated position data on the new AnchorNode
anchorNode.setLocalPosition(newLocPos);
// set the scene as parent to have the node in the UI
anchorNode.setParent(arFragment.getArSceneView().getScene());
// draw a line between the nodes
// ...
This works as expected as long as I hold the smartphone vertically.
When I rotate the camera, the line is no longer drawn to the middle, because the created anchor is no longer in the middle of the screen.
Is there an official or better way to make a vertical line as expected, no matter how I hold the smartphone?
UPDATE
I have now managed to create the line by calculating a point of intersection.
private static MyPoint calculateIntersectionPoint(MyPoint A, MyPoint B, MyPoint C, MyPoint D) {
    // Line AB represented as a1*x + b1*y = c1
    double a1 = B.y - A.y;
    double b1 = A.x - B.x;
    double c1 = a1 * A.x + b1 * A.y;
    // Line CD represented as a2*x + b2*y = c2
    double a2 = D.y - C.y;
    double b2 = C.x - D.x;
    double c2 = a2 * C.x + b2 * C.y;
    double determinant = a1 * b2 - a2 * b1;
    if (determinant == 0) {
        // The lines are parallel. This is simplified
        // by returning a pair of MAX_VALUE.
        return new MyPoint(Double.MAX_VALUE, Double.MAX_VALUE);
    } else {
        double x = (b2 * c1 - b1 * c2) / determinant;
        double y = (a1 * c2 - a2 * c1) / determinant;
        return new MyPoint(x, y);
    }
}
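As a quick numeric sanity check of the helper above, here is a self-contained sketch with a minimal `MyPoint` (assumed, from the usage, to hold public `x`/`y` fields):

```java
// Self-contained check of the line-intersection formula: the line through
// (0,0) and (2,2) (y = x) and the line through (0,2) and (2,0) (y = -x + 2)
// intersect at (1, 1).
public class IntersectionDemo {
    static class MyPoint {
        double x, y;
        MyPoint(double x, double y) { this.x = x; this.y = y; }
    }

    static MyPoint calculateIntersectionPoint(MyPoint A, MyPoint B, MyPoint C, MyPoint D) {
        double a1 = B.y - A.y, b1 = A.x - B.x, c1 = a1 * A.x + b1 * A.y;
        double a2 = D.y - C.y, b2 = C.x - D.x, c2 = a2 * C.x + b2 * C.y;
        double determinant = a1 * b2 - a2 * b1;
        if (determinant == 0) {
            // Parallel lines: signalled by a pair of MAX_VALUE.
            return new MyPoint(Double.MAX_VALUE, Double.MAX_VALUE);
        }
        return new MyPoint((b2 * c1 - b1 * c2) / determinant,
                           (a1 * c2 - a2 * c1) / determinant);
    }

    public static void main(String[] args) {
        MyPoint p = calculateIntersectionPoint(
                new MyPoint(0, 0), new MyPoint(2, 2),
                new MyPoint(0, 2), new MyPoint(2, 0));
        System.out.println(p.x + "," + p.y); // prints 1.0,1.0
    }
}
```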
MyPoint interceptionPoint = calculateIntersectionPoint(
        new MyPoint(line1From.getWorldPosition().x, line1From.getWorldPosition().y),
        new MyPoint(line1To.getWorldPosition().x, line1To.getWorldPosition().y),
        new MyPoint(line2From.getWorldPosition().x, line2From.getWorldPosition().y),
        new MyPoint(line2To.getWorldPosition().x, line2To.getWorldPosition().y));
// get the world position of the new anchor node
Vector3 newLocPos = anchorNode.getWorldPosition();
// since we want a vertical line, set x and z of the new AnchorNode to the same as the start
newLocPos.y = (float) interceptionPoint.y;
newLocPos.x = fromVector.x;
newLocPos.z = fromVector.z;
// set the updated position data on the new AnchorNode
anchorNode.setWorldPosition(newLocPos);
The only problem is that the line must be exactly in the middle (left/right) of the smartphone screen; otherwise the line ends up too long or too short. (A sphere marks the middle of the camera view.)
UPDATE 2
Some more explanation: we set the start point of the vertical line somewhere in the xz-plane. Then we stand some distance away from the line and look at the point where we want the line to end.
I then looked at the problem in 2D: I imagined that the camera's start and end points lie in the same xy-plane as the line, and calculated the end point of the line.
The problem: the start and end points of the camera view have a different z than the line I am drawing.
What needs to be found is the point (and its y-value) on the camera view vector that lies in the same xy-plane as the line (z of line = z of point).
Does anyone have an idea?
It looks like you are removing the anchor from the anchorNode as one of your steps. I think this is because you want to move the anchorNode, but I suspect this is the root of your problem.
The code below will draw a line between any two AnchorNodes and remain in place when you rotate the camera (allowing for any small error or movement of your device). It is taken from this answer (https://stackoverflow.com/a/52816504/334402) and built into the project linked below so you can check it:
private void drawLine(AnchorNode node1, AnchorNode node2) {
    // Draw a line between two AnchorNodes (adapted from https://stackoverflow.com/a/52816504/334402)
    Log.d(TAG, "drawLine");
    Vector3 point1, point2;
    point1 = node1.getWorldPosition();
    point2 = node2.getWorldPosition();
    // First, find the vector extending between the two points and define a
    // look rotation in terms of this vector.
    final Vector3 difference = Vector3.subtract(point1, point2);
    final Vector3 directionFromTopToBottom = difference.normalized();
    final Quaternion rotationFromAToB =
            Quaternion.lookRotation(directionFromTopToBottom, Vector3.up());
    MaterialFactory.makeOpaqueWithColor(getApplicationContext(), new Color(0, 255, 244))
            .thenAccept(material -> {
                /* Then, create a rectangular prism using ShapeFactory.makeCube() and
                   use the difference vector to extend it to the necessary length. */
                Log.d(TAG, "drawLine inside .thenAccept");
                ModelRenderable model = ShapeFactory.makeCube(
                        new Vector3(.01f, .01f, difference.length()),
                        Vector3.zero(), material);
                /* Last, set the world rotation of the node to the rotation calculated
                   earlier, and set the world position to the midpoint between the points. */
                Anchor lineAnchor = node2.getAnchor();
                nodeForLine = new Node();
                nodeForLine.setParent(node1);
                nodeForLine.setRenderable(model);
                nodeForLine.setWorldPosition(Vector3.add(point1, point2).scaled(.5f));
                nodeForLine.setWorldRotation(rotationFromAToB);
            });
}
You can see the full code here: https://github.com/mickod/LineView
I'm developing an Android app capable of recognizing text (using Google Vision).
My goal is to wrap the recognized text with an AR rectangle (I'm using ARCore) as soon as it matches a given character sequence.
The problem I'm facing is that the text I want to recognize is on a small piece of metal.
That makes it impossible to detect a plane on it, and therefore impossible to place the 3D rectangle.
I was wondering whether, with the coordinates I get from the detected text (either the four corner points or getBoundingBox()), it is possible to create a custom plane on the metal item in order to display my rectangle.
I've already tried different approaches, without success.
ArFragment fragment;
Session session = fragment.getArSceneView().getSession();
float[] pos = {0, 0, -1};
float[] rotation = {0, 0, 0, 1};
Anchor anchor = session.createAnchor(new Pose(pos, rotation));
placeObject(fragment, anchor, Uri.parse("model.sfb"));

private void placeObject(ArFragment arFragment, Anchor anchor, Uri uri) {
    ModelRenderable.builder()
            .setSource(arFragment.getContext(), uri)
            .build()
            .thenAccept(modelRenderable -> addNodeToScene(arFragment, anchor, modelRenderable))
            .exceptionally(throwable -> {
                Toast.makeText(arFragment.getContext(), "Error:" + throwable.getMessage(), Toast.LENGTH_LONG).show();
                return null;
            });
}

private void addNodeToScene(ArFragment arFragment, Anchor anchor, ModelRenderable renderable) {
    AnchorNode anchorNode = new AnchorNode(anchor);
    TransformableNode node = new TransformableNode(arFragment.getTransformationSystem());
    node.setRenderable(renderable);
    node.setParent(anchorNode);
    arFragment.getArSceneView().getScene().addChild(anchorNode);
    node.select();
}
I believe ARCore will let you access the feature points that it tracks (for example, if using Unreal, see https://developers.google.com/ar/reference/unreal/arcore/blueprint/Get_All_Trackable_Points). You could search for feature points tracked by ARCore within the area of the image defined by the coordinates you get from the detected text. I believe you can get the pose of tracked points, but I'm not sure if the orientation piece of the pose would have a high confidence without sufficient feature points to infer existence of a plane.
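One way to sketch that idea on Android (hedged: `textCenterX`/`textCenterY` stand for the screen coordinates of the detected text's bounding-box centre, and hit-test results are not guaranteed on low-texture metal):

```java
// Sketch: hit-test the screen point at the centre of the detected text's
// bounding box and anchor to whatever trackable is found there.
Frame frame = fragment.getArSceneView().getArFrame();
if (frame != null && frame.getCamera().getTrackingState() == TrackingState.TRACKING) {
    for (HitResult hit : frame.hitTest(textCenterX, textCenterY)) {
        Trackable trackable = hit.getTrackable();
        // Feature points have a pose, but their orientation is low-confidence
        // without a surrounding plane, as noted above.
        if (trackable instanceof Point) {
            Anchor anchor = hit.createAnchor();
            placeObject(fragment, anchor, Uri.parse("model.sfb"));
            break;
        }
    }
}
```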
I'm using Sceneform with ARCore on Android and am unable to understand the concepts clearly from the documentation provided. I'm trying to modify the existing HelloSceneform app from GitHub to create an app where, as soon as it starts, the user sees a 3D object directly in front of them. This is very similar to what I found at https://github.com/google-ar/arcore-unity-sdk/issues/144, but I couldn't figure out how to adapt the existing code to get there.
setContentView(R.layout.activity_ux);
arFragment = (ArFragment) getSupportFragmentManager().findFragmentById(R.id.ux_fragment);
ModelRenderable.builder()
        .setSource(this, R.raw.andy)
        .build()
        .thenAccept(modelRenderable -> {
            andyRenderable = modelRenderable;
        });
arFragment.setOnTapArPlaneListener(
        (HitResult hitResult, Plane plane, MotionEvent motionEvent) -> {
            Anchor anchor = hitResult.createAnchor();
            AnchorNode anchorNode = new AnchorNode(anchor);
            anchorNode.setParent(arFragment.getArSceneView().getScene());
            TransformableNode andy = new TransformableNode(arFragment.getTransformationSystem());
            andy.setParent(anchorNode);
            andy.setRenderable(andyRenderable);
            andy.select();
        });
I just need to disable surface detection, get a Pose object and an anchor, and place the object directly without any touch listeners on Android, all in Java code. When I try to create an anchor using a pose, it gives me a NotTrackingException.
session=new Session(this);
...
Pose pose = Pose.makeTranslation(-0.41058916f, -0.6668466f,
Anchor anchor = session.createAnchor(pose);
I hope someone can take their time to help.
@Override
public void onUpdate(FrameTime frameTime) {
    Frame frame = playFragment.getArSceneView().getArFrame();
    if (frame == null) {
        return;
    }
    if (frame.getCamera().getTrackingState() != TrackingState.TRACKING) {
        return;
    }
    for (Plane plane : frame.getUpdatedTrackables(Plane.class)) {
        playFragment.getPlaneDiscoveryController().hide();
        if (plane.getTrackingState() == TrackingState.TRACKING) {
            for (HitResult hit : frame.hitTest(getScreenCenter().x, getScreenCenter().y)) {
                Trackable trackable = hit.getTrackable();
                if (trackable instanceof Plane && ((Plane) trackable).isPoseInPolygon(hit.getHitPose())) {
                    Anchor anchor = hit.createAnchor();
                    AnchorNode anchorNode = new AnchorNode(anchor);
                    anchorNode.setParent(playFragment.getArSceneView().getScene());
                    Pose pose = hit.getHitPose();
                    Node node = new Node();
                    node.setRenderable(modelRenderable);
                    node.setLocalPosition(new Vector3(pose.tx(), pose.compose(Pose.makeTranslation(0.0f, 0.05f, 0.0f)).ty(), pose.tz()));
                    node.setParent(anchorNode);
                }
            }
        }
    }
}

private Vector3 getScreenCenter() {
    View vw = findViewById(android.R.id.content);
    return new Vector3(vw.getWidth() / 2f, vw.getHeight() / 2f, 0f);
}
I am a bit confused about how to tilt an image downwards with the help of TransformableNode in the Google ARCore Sceneform API. I am using the Google Sceneform example, and I am able to place content on the screen successfully.
Please take a look at the image below showing how it currently looks.
However, I want to tilt the Facebook icon downwards like the earth model that sits on the table. I have tried using Node and TransformableNode as stated here, but failed to do so. Can anyone tell me how? Here is what I have tried so far.
public class AugmentedImageNodee extends AnchorNode {
    private static final String TAG = "AugmentedImageNode";

    // The augmented image represented by this node.
    private AugmentedImage image;

    // Models of the 4 corners. We use completable futures here to simplify
    // the error handling and asynchronous loading. The loading is started with the
    // first construction of an instance, and then used when the image is set.
    // private static CompletableFuture<ModelRenderable> ulCorner;
    private static CompletableFuture<ViewRenderable> ulCorner;
    private static CompletableFuture<ModelRenderable> urCorner;
    private static CompletableFuture<ModelRenderable> lrCorner;
    private static CompletableFuture<ModelRenderable> llCorner;
    private ArFragment arFragment;

    public AugmentedImageNodee(Context context, ArFragment arFragment) {
        this.arFragment = arFragment;
        // Upon construction, start loading the models for the corners of the frame.
        if (ulCorner == null) {
            /*=================================================================================*/
            /* Below is my only layout fb object for rendering; the rest are Google's. */
            /*=================================================================================*/
            ulCorner = ViewRenderable.builder()
                    .setView(context, R.layout.fb_layout)
                    .build();
            urCorner = ModelRenderable.builder()
                    .setSource(context, Uri.parse("models/frame_upper_right.sfb"))
                    .build();
            llCorner = ModelRenderable.builder()
                    .setSource(context, Uri.parse("models/frame_lower_left.sfb"))
                    .build();
            lrCorner = ModelRenderable.builder()
                    .setSource(context, Uri.parse("models/frame_lower_right.sfb"))
                    .build();
        }
    }

    /**
     * Called when the AugmentedImage is detected and should be rendered. A Sceneform node tree is
     * created based on an Anchor created from the image. The corners are then positioned based on the
     * extents of the image. There is no need to worry about world coordinates since everything is
     * relative to the center of the image, which is the parent node of the corners.
     */
    @SuppressWarnings({"AndroidApiChecker", "FutureReturnValueIgnored"})
    public void setImage(AugmentedImage image) {
        this.image = image;
        // If any of the models are not loaded, then recurse when all are loaded.
        if (!ulCorner.isDone() || !urCorner.isDone() || !llCorner.isDone() || !lrCorner.isDone()) {
            CompletableFuture.allOf(ulCorner, urCorner, llCorner, lrCorner)
                    .thenAccept((Void aVoid) -> setImage(image))
                    .exceptionally(throwable -> {
                        Log.e(TAG, "Exception loading", throwable);
                        return null;
                    });
        }
        // Set the anchor based on the center of the image.
        setAnchor(image.createAnchor(image.getCenterPose()));

        /*=================================================================================*/
        /* My node for placing the fb */
        /*=================================================================================*/
        // Make the 4 corner nodes.
        Vector3 localPosition = new Vector3();
        TransformableNode cornerNode;

        // Upper left corner.
        localPosition.set(-0.5f * image.getExtentX(), 0.0f, -0.5f * image.getExtentZ());
        cornerNode = new TransformableNode(arFragment.getTransformationSystem());
        // cornerNode.setLocalRotation(Quaternion.axisAngle(new Vector3(0f, 0f, 0f), 180));
        cornerNode.setParent(this);
        cornerNode.setLocalPosition(localPosition);
        cornerNode.setRenderable(ulCorner.getNow(null));

        // Upper right corner.
        localPosition.set(0.5f * image.getExtentX(), 0.0f, -0.5f * image.getExtentZ());
        cornerNode = new TransformableNode(arFragment.getTransformationSystem());
        cornerNode.setParent(this);
        cornerNode.setLocalPosition(localPosition);
        cornerNode.setRenderable(urCorner.getNow(null));

        // Lower right corner.
        localPosition.set(0.5f * image.getExtentX(), 0.0f, 0.5f * image.getExtentZ());
        cornerNode = new TransformableNode(arFragment.getTransformationSystem());
        cornerNode.setParent(this);
        cornerNode.setLocalPosition(localPosition);
        cornerNode.setRenderable(lrCorner.getNow(null));

        // Lower left corner.
        localPosition.set(-0.5f * image.getExtentX(), 0.0f, 0.5f * image.getExtentZ());
        cornerNode = new TransformableNode(arFragment.getTransformationSystem());
        cornerNode.setParent(this);
        cornerNode.setLocalPosition(localPosition);
        cornerNode.setRenderable(llCorner.getNow(null));
    }

    public AugmentedImage getImage() {
        return image;
    }
}
Similar to setting the position with cornerNode.setLocalPosition(localPosition);, you can set the rotation with cornerNode.setLocalRotation(new Quaternion(90f, 0f, 0f, -90f));. Note that the idiomatic way to build a rotation in Sceneform is Quaternion.axisAngle(axis, degrees) rather than raw quaternion components.
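For example, a hedged sketch using Quaternion.axisAngle to tilt the corner node downwards (the exact axis and angle depend on how the layout is authored, so you may need to experiment):

```java
// Tilt the node -90 degrees about its local x-axis so the view lies flat,
// like the earth model on the table.
cornerNode.setLocalRotation(Quaternion.axisAngle(new Vector3(1f, 0f, 0f), -90f));
```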