I am adding an image on a vertical plane in a Sceneform ArFragment, but it always gets rotated. The code works fine on horizontal planes. My code for placing images on a vertical plane is as follows:
arFragment.setOnTapArPlaneListener { hitResult: HitResult, plane: Plane, motionEvent: MotionEvent ->
    if (!isOnceTapedOnSurface) {
        val anchor = hitResult.createAnchor()
        val anchorNode = AnchorNode(anchor)
        anchorNode.setParent(arFragment.arSceneView.scene)
        andy = TransformableNode(arFragment.transformationSystem)
        if (plane.type == Plane.Type.VERTICAL) {
            // Align the node's up vector with the wall's normal so the
            // renderable lies flat against the vertical plane.
            val anchorUp = anchorNode.up
            andy.setLookDirection(Vector3.up(), anchorUp)
        }
        andy.setParent(anchorNode)
        andy.renderable = andyRenderable
        andy.select()
        // arFragment.arSceneView.planeRenderer.isVisible = false
        isOnceTapedOnSurface = true
    }
}
To fix this issue you can use the above solution, but you should rotate the object using its world rotation, not its local rotation, and zero the rotation value first. If you use local rotation, the object inherits the anchor's (parent's) rotation; by using world rotation we can control the object directly.
String planeType = "";
// When tapping on the surface you can get the plane type from the hit.
if (plane.getType() == Plane.Type.VERTICAL) {
    planeType = "Vertical";
} else if (plane.getType() == Plane.Type.HORIZONTAL_UPWARD_FACING) {
    planeType = "Horizontal_Upward";
} else if (plane.getType() == Plane.Type.HORIZONTAL_DOWNWARD_FACING) {
    planeType = "Horizontal_Downward";
} else {
    planeType = "Horizontal";
}

// First set the object's world rotation to zero.
transformableNode.setWorldRotation(Quaternion.axisAngle(new Vector3(0f, 0f, 0f), 0));

// Check the plane type; if it is vertical, the logic below aligns the node with the wall.
if (planeType.equals("Vertical")) {
    Vector3 anchorUp = anchorNode.getUp();
    transformableNode.setLookDirection(Vector3.up(), anchorUp);
}
To fix this issue you can use Plane.getCenterPose(). It returns the pose of the center of the detected plane. The pose's transformed +Y axis points normal out of the plane, with the +X and +Z axes orienting the extents of the bounding rectangle.
anchor = mySession.createAnchor(plane.getCenterPose())
While the plane's tracking state is TRACKING, this pose is synced with the latest frame. While it is PAUSED, an identity pose is returned.
Your code could be the following:
Anchor newAnchor = null;
for (Plane plane : mSession.getAllTrackables(Plane.class)) {
    if (plane.getType() == Plane.Type.VERTICAL
            && plane.getTrackingState() == TrackingState.TRACKING) {
        newAnchor = plane.createAnchor(plane.getCenterPose());
        break;
    }
}
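A hedged follow-up, assuming Sceneform is in use and the loop above found a tracking vertical plane: the plane-centered anchor can back an AnchorNode so the renderable inherits the plane's orientation. The names `arFragment` and `andyRenderable` are borrowed from the earlier snippets.

// Assumed usage: attach a renderable through the plane-centered anchor.
AnchorNode anchorNode = new AnchorNode(newAnchor);
anchorNode.setParent(arFragment.getArSceneView().getScene());
TransformableNode node = new TransformableNode(arFragment.getTransformationSystem());
node.setRenderable(andyRenderable);
node.setParent(anchorNode);
node.select();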
One more thing from Google ARCore software engineers:
Keep objects close to anchors.
When anchoring objects, make sure that they are close to the anchor you are using. Avoid placing objects farther than a few meters from the anchor to prevent unexpected rotational movement due to ARCore's updates to world space coordinates.
If you need to place an object more than a few meters away from an existing anchor, create a new anchor closer to this position and attach the object to the new anchor.
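A minimal sketch of that last point in Sceneform terms (my assumption, not code from the ARCore engineers' note): if a node sits too far from its anchor, create a fresh anchor at the node's current world position and re-parent the node to it. The names `session`, `arFragment`, and `node` stand in for your own objects.

// Re-anchor a node that has drifted far from its original anchor.
Vector3 worldPos = node.getWorldPosition();
Pose nearPose = Pose.makeTranslation(worldPos.x, worldPos.y, worldPos.z);
Anchor nearAnchor = session.createAnchor(nearPose);
AnchorNode nearAnchorNode = new AnchorNode(nearAnchor);
nearAnchorNode.setParent(arFragment.getArSceneView().getScene());
node.setParent(nearAnchorNode);
// setParent keeps the node's local transform, so restore the world position.
node.setWorldPosition(worldPos);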
I'm building my app around this Agora ARCore demo, which is based on Google's hello_ar_java sample app.
I have tried using OpenGL and Sceneform together with no success: How to draw a line between anchors on the plane with ARcore without arFragment
From what I read, I have to use OpenGL to draw a line, using GL_LINES or GL_LINE_STRIP.
This is how I am proceeding:
public void onDrawFrame(GL10 gl) {
    // Clear screen to notify driver it should not load any pixels from previous frame.
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    if (mSession == null) {
        return;
    }
    // Notify ARCore session that the view size changed so that the perspective matrix and
    // the video background can be properly adjusted.
    mDisplayRotationHelper.updateSessionIfNeeded(mSession);
    try {
        // Obtain the current frame from ARSession. When the configuration is set to
        // UpdateMode.BLOCKING (it is by default), this will throttle the rendering to the
        // camera framerate.
        Frame frame = mSession.update();
        Camera camera = frame.getCamera();

        // Handle taps. Handling only one tap per frame, as taps are usually low frequency
        // compared to frame rate.
        MotionEvent tap = queuedSingleTaps.poll();
        if (tap != null && camera.getTrackingState() == TrackingState.TRACKING) {
            for (HitResult hit : frame.hitTest(tap)) {
                // Check if any plane was hit, and if it was hit inside the plane polygon.
                Trackable trackable = hit.getTrackable();
                // Create an anchor if a plane or an oriented point was hit.
                if ((trackable instanceof Plane && ((Plane) trackable).isPoseInPolygon(hit.getHitPose()))
                        || (trackable instanceof Point
                                && ((Point) trackable).getOrientationMode()
                                        == Point.OrientationMode.ESTIMATED_SURFACE_NORMAL)) {
                    // Hits are sorted by depth. Consider only the closest hit on a plane or
                    // oriented point. Cap the number of objects created. This avoids
                    // overloading both the rendering system and ARCore.
                    if (anchors.size() >= 250) {
                        anchors.get(0).detach();
                        anchors.remove(0);
                    }
                    // Adding an Anchor tells ARCore that it should track this position in
                    // space. This anchor is created on the Plane to place the 3D model
                    // in the correct position relative both to the world and to the plane.
                    anchors.add(hit.createAnchor());
                    break;
                }
            }
        }

        // Draw background.
        mBackgroundRenderer.draw(frame);

        // If not tracking, don't draw 3D objects.
        if (camera.getTrackingState() == TrackingState.PAUSED) {
            return;
        }

        // Get projection matrix.
        float[] projmtx = new float[16];
        camera.getProjectionMatrix(projmtx, 0, 0.1f, 100.0f);

        // Get camera matrix and draw.
        float[] viewmtx = new float[16];
        camera.getViewMatrix(viewmtx, 0);

        // Compute lighting from average intensity of the image.
        final float lightIntensity = frame.getLightEstimate().getPixelIntensity();

        if (isShowPointCloud()) {
            // Visualize tracked points.
            PointCloud pointCloud = frame.acquirePointCloud();
            mPointCloud.update(pointCloud);
            mPointCloud.draw(viewmtx, projmtx);
            // Application is responsible for releasing the point cloud resources after using it.
            pointCloud.release();
        }

        // Check if we detected at least one plane. If so, hide the loading message.
        if (mMessageSnackbar != null) {
            for (Plane plane : mSession.getAllTrackables(Plane.class)) {
                if (plane.getType() == Plane.Type.HORIZONTAL_UPWARD_FACING
                        && plane.getTrackingState() == TrackingState.TRACKING) {
                    hideLoadingMessage();
                    break;
                }
            }
        }

        if (isShowPlane()) {
            // Visualize planes.
            mPlaneRenderer.drawPlanes(
                    mSession.getAllTrackables(Plane.class), camera.getDisplayOrientedPose(), projmtx);
        }

        // Visualize anchors created by touch.
        float scaleFactor = 1.0f;
        for (Anchor anchor : anchors) {
            if (anchor.getTrackingState() != TrackingState.TRACKING) {
                continue;
            }
            // Get the current pose of an Anchor in world space. The Anchor pose is updated
            // during calls to session.update() as ARCore refines its estimate of the world.
            anchor.getPose().toMatrix(mAnchorMatrix, 0);

            // Update and draw the model and its shadow.
            mVirtualObject.updateModelMatrix(mAnchorMatrix, scaleFactor);
            mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
            mVirtualObject.draw(viewmtx, projmtx, lightIntensity);
            mVirtualObjectShadow.draw(viewmtx, projmtx, lightIntensity);
        }
        sendARViewMessage();
    } catch (Throwable t) {
        // Avoid crashing the application due to unhandled exceptions.
        Log.e(TAG, "Exception on the OpenGL thread", t);
    }
}
The code I find online expects a list of coordinates to draw a line. In my case I have several Anchors that need to be connected; I don't have a list of points.
What is the easiest way to draw line using OpenGL-ES (android)
Somehow I am able to pass coordinates, but either a separate line is generated for each Anchor, not connected to the previous one, or many lines are displayed that are not connected to the Anchors in any way.
Any suggestions for passing the coordinates of the taps or Anchors to the Line class?
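One hedged way to approach this, reusing the `anchors` list from the tap handling above: treat each anchor pose as one vertex and draw with GL_LINE_STRIP, which connects consecutive vertices into a single polyline. This is a sketch, not a drop-in class; `lineProgram`, `positionAttrib`, and `mvpUniform` are hypothetical handles for a trivial unlit line shader you would compile and link yourself.

import android.opengl.GLES20;
import android.opengl.Matrix;
import com.google.ar.core.Anchor;
import com.google.ar.core.Pose;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.util.List;

public class AnchorLineRenderer {
    private int lineProgram;    // compiled and linked elsewhere
    private int positionAttrib; // glGetAttribLocation(lineProgram, "a_Position")
    private int mvpUniform;     // glGetUniformLocation(lineProgram, "u_MvpMatrix")

    public void drawLineStrip(List<Anchor> anchors, float[] viewmtx, float[] projmtx) {
        if (anchors.size() < 2) {
            return; // at least two points are needed for a line
        }
        // Flatten the anchor world positions into x, y, z triples.
        float[] vertices = new float[anchors.size() * 3];
        int i = 0;
        for (Anchor anchor : anchors) {
            Pose pose = anchor.getPose();
            vertices[i++] = pose.tx();
            vertices[i++] = pose.ty();
            vertices[i++] = pose.tz();
        }
        FloatBuffer buffer = ByteBuffer.allocateDirect(vertices.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer()
                .put(vertices);
        buffer.position(0);

        // The vertices are already in world space, so the MVP is just proj * view.
        float[] mvp = new float[16];
        Matrix.multiplyMM(mvp, 0, projmtx, 0, viewmtx, 0);

        GLES20.glUseProgram(lineProgram);
        GLES20.glUniformMatrix4fv(mvpUniform, 1, false, mvp, 0);
        GLES20.glEnableVertexAttribArray(positionAttrib);
        GLES20.glVertexAttribPointer(positionAttrib, 3, GLES20.GL_FLOAT, false, 0, buffer);
        GLES20.glLineWidth(8.0f);
        GLES20.glDrawArrays(GLES20.GL_LINE_STRIP, 0, anchors.size());
        GLES20.glDisableVertexAttribArray(positionAttrib);
    }
}

Calling drawLineStrip(anchors, viewmtx, projmtx) from onDrawFrame, after the anchor loop, would redraw the polyline every frame with the anchors' latest poses.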
I am placing a 2D image on a vertical wall, like a painting hanging on a wall. But the rendered image is rotated by a random angle; it is not properly aligned. I have integrated the following code:
@RequiresApi(api = Build.VERSION_CODES.N)
private void createViewRenderable(WeakReference<ARRoomViewActivity> weakActivity) {
    imageViewRenderable = new ImageView(this);
    Picasso.get().load(paintingImageUrl)
            .networkPolicy(NetworkPolicy.OFFLINE)
            .into(imageViewRenderable);
    ViewRenderable.builder()
            .setView(this, imageViewRenderable)
            .build()
            .thenAccept(renderable -> {
                ARRoomViewActivity activity = weakActivity.get();
                if (activity != null) {
                    activity.renderable = renderable;
                }
            })
            .exceptionally(throwable -> {
                return null;
            });
}

private void addToScene(HitResult hitResult) {
    if (renderable == null) {
        return;
    }
    Anchor anchor = hitResult.createAnchor();
    anchorNode = new AnchorNode(anchor);
    anchorNode.setParent(arFragment.getArSceneView().getScene());
    node = new Node();
    node.setRenderable(renderable);
    transformableNode = new TransformableNode(arFragment.getTransformationSystem());
    node.setLocalRotation(Quaternion.axisAngle(new Vector3(-1f, 0f, 0f), -90f));
    node.setLookDirection(new Vector3(0f, 10f, 0f));
    transformableNode.setParent(anchorNode);
    node.setParent(transformableNode);
    transformableNode.select();
}
I am using the following Sceneform version, and I also tried the latest 1.17.0, but could not succeed:
implementation 'com.google.ar.sceneform.ux:sceneform-ux:1.7.0'
The orientation of anchors is by default such that one of the axes points somewhat towards the camera. This makes models placed on the floor "look at" the user, but models on walls end up rotated arbitrarily. This can be fixed by setting the look direction of the node.
In your addToScene method, you can introduce an intermediate node that fixes the orientation relative to the anchor node. Children of the intermediate node will then be oriented correctly.
// Create an anchor node that will stay attached to the wall.
Anchor anchor = hitResult.createAnchor();
anchorNode = new AnchorNode(anchor);
anchorNode.setParent(arFragment.getArSceneView().getScene());

// Create an intermediate node and orient it to be level by fixing its look direction.
// This is needed specifically for nodes on walls.
Node intermediateNode = new Node();
intermediateNode.setParent(anchorNode);
Vector3 anchorUp = anchorNode.getUp();
intermediateNode.setLookDirection(Vector3.up(), anchorUp);
Note: the last two lines in the code sample above are the key to getting the orientation correct.
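To complete the picture (this part is an assumption, not in the original answer): the question's renderable nodes would then be parented beneath the intermediate node instead of directly to the anchor node, and the manual setLocalRotation/setLookDirection calls on `node` can be dropped.

// Attach the question's nodes beneath the corrected intermediate node.
transformableNode = new TransformableNode(arFragment.getTransformationSystem());
transformableNode.setParent(intermediateNode);
node = new Node();
node.setRenderable(renderable);
node.setParent(transformableNode);
transformableNode.select();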
I'm developing an Android app capable of recognizing text (doing it with Google Vision).
My goal is to wrap the recognized text with an AR rectangle (I'm using ARCore) as soon as it matches a given char sequence.
The problem I'm facing is that the text I want to recognize is on a small piece of metal.
That makes it impossible to detect a Plane on it, and therefore impossible to place the 3D rectangle.
I was wondering whether, with the coordinates I get from the detected text (either the 4 corner points or getBoundingBox()), it is possible to create a custom Plane on the metal item in order to display my rectangle.
I've already tried different ways of doing it, without success.
ArFragment fragment;
Session session = fragment.getArSceneView().getSession();
float[] pos = {0, 0, -1};
float[] rotation = {0, 0, 0, 1};
Anchor anchor = session.createAnchor(new Pose(pos, rotation));
placeObject(fragment, anchor, Uri.parse("model.sfb"));

private void placeObject(ArFragment arFragment, Anchor anchor, Uri uri) {
    ModelRenderable.builder()
            .setSource(arFragment.getContext(), uri)
            .build()
            .thenAccept(modelRenderable -> addNodeToScene(arFragment, anchor, modelRenderable))
            .exceptionally(throwable -> {
                Toast.makeText(arFragment.getContext(), "Error: " + throwable.getMessage(), Toast.LENGTH_LONG).show();
                return null;
            });
}

private void addNodeToScene(ArFragment arFragment, Anchor anchor, ModelRenderable renderable) {
    AnchorNode anchorNode = new AnchorNode(anchor);
    TransformableNode node = new TransformableNode(arFragment.getTransformationSystem());
    node.setRenderable(renderable);
    node.setParent(anchorNode);
    arFragment.getArSceneView().getScene().addChild(anchorNode);
    node.select();
}
I believe ARCore will let you access the feature points that it tracks (for example, if using Unreal, see https://developers.google.com/ar/reference/unreal/arcore/blueprint/Get_All_Trackable_Points). You could search for feature points tracked by ARCore within the area of the image defined by the coordinates you get from the detected text. I believe you can get the pose of tracked points, but I'm not sure whether the orientation part of the pose would have high confidence without enough feature points to infer the existence of a plane.
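As a hedged sketch of that idea with the Android SDK rather than Unreal: instead of requiring a Plane, you can hit-test the center of the text's bounding box and accept oriented feature points, which can exist even on a small metal surface. This assumes `textBounds` is the android.graphics.Rect from Google Vision's getBoundingBox(), already mapped from image coordinates into ARCore view coordinates (that mapping is not shown here).

// Try to anchor on an oriented feature point under the recognized text.
private Anchor anchorOnText(Frame frame, Rect textBounds) {
    float cx = textBounds.exactCenterX();
    float cy = textBounds.exactCenterY();
    for (HitResult hit : frame.hitTest(cx, cy)) {
        if (hit.getTrackable() instanceof Point) {
            Point point = (Point) hit.getTrackable();
            // An estimated surface normal gives a usable orientation.
            if (point.getOrientationMode() == Point.OrientationMode.ESTIMATED_SURFACE_NORMAL) {
                return hit.createAnchor();
            }
        }
    }
    return null; // no oriented feature point found under the text
}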
I am investigating Augmented Reality on Android.
I am using ARCore and Sceneform within an Android application.
I have tried out the sample projects and now would like to develop my own application.
One effect I would like to achieve is to combine/overlay an image (say a .jpeg or .png) with a live feed from the device's onboard camera.
The image will have a transparent background that allows the user to see the live feed and the image simultaneously.
However, I do not want the overlaid image to be a fixed/static watermark: when the user zooms in, out, or pans, the overlaid image must also zoom in, out, and pan, etc.
I do not wish the overlaid image to become 3D or anything of that nature.
Is this effect possible with Sceneform, or will I need to use other third-party libraries and/or tools to achieve the desired results?
UPDATE
The user is drawing on a blank sheet of white paper. The sheet of paper is oriented so that the user can draw comfortably (either left- or right-handed). The user is free to move the sheet of paper while they complete their image.
An Android device is held above the sheet of paper filming the user drawing their selected image.
The live camera feed is being cast to a large TV or monitor screen.
To aid the user they have selected a static image to "trace" or "Copy".
This image is chosen on the Android device and is being combined with the live camera stream within the Android application.
The user can zoom in and out on their drawing, and the combined live stream and selected static image will also zoom in and out; this will enable the user to make an accurate copy of the selected static image by drawing free-hand.
When the user looks directly at the sheet of paper, they only see their drawing.
When the user views the cast live stream of them drawing on the TV or monitor they see their drawing and the chosen static image superimposed. The user can control the transparency of the static image to assist them in making an accurate copy of it.
I think what you are looking for is to use AR to display an image so that the image stays in place, for example over a sheet of paper in order to act as a guide for drawing a copy of the image on the paper.
There are two parts to this: the first is to locate the sheet of paper, and the second is to place the image over the paper and keep it there as the phone moves around.
Locating the sheet of paper can be done just by detecting the plane containing the paper (some contrast, a pattern, or similar versus a plain white sheet of paper will help), then tapping where the center of the page should be. This is done in the HelloSceneform sample.
If you want a more accurate bounding of the paper, you can tap the 4 corners of the paper and then create anchors there. To do this, register a plane-tap listener in onCreate():
arFragment.setOnTapArPlaneListener(this::onPlaneTapped);
Then in onPlaneTapped, create the 4 anchorNodes. Once you have 4, initialize the drawing to be displayed.
private void onPlaneTapped(HitResult hitResult, Plane plane, MotionEvent event) {
    if (cornerAnchors.size() != 4) {
        AnchorNode corner = createCornerNode(hitResult.createAnchor());
        arFragment.getArSceneView().getScene().addChild(corner);
        cornerAnchors.add(corner);
    }
    if (cornerAnchors.size() == 4 && drawingNode == null) {
        initializeDrawing();
    }
}
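The helper createCornerNode is not shown in the answer; a minimal sketch of what it might look like (assumed):

// Wrap a corner anchor in an AnchorNode; a small marker renderable could be
// attached here so the user can see each tapped corner.
private AnchorNode createCornerNode(Anchor anchor) {
    return new AnchorNode(anchor);
}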
To initialize the drawing, create a Sceneform Texture from the bitmap or drawable. This can be from a resource or a file URL. You want the texture to show the whole image, and scale as the model holding it is resized.
private void initializeDrawing() {
    Texture.Sampler sampler = Texture.Sampler.builder()
            .setWrapMode(Texture.Sampler.WrapMode.CLAMP_TO_EDGE)
            .setMagFilter(Texture.Sampler.MagFilter.NEAREST)
            .setMinFilter(Texture.Sampler.MinFilter.LINEAR_MIPMAP_LINEAR)
            .build();
    Texture.builder()
            .setSource(this, R.drawable.logo_google_developers)
            .setSampler(sampler)
            .build()
            .thenAccept(texture -> {
                MaterialFactory.makeTransparentWithTexture(this, texture)
                        .thenAccept(this::buildDrawingRenderable);
            });
}
The model to hold the texture is just a flat quad sized to the smallest dimension between the corners. This is the same logic as laying out a quad using OpenGL.
private void buildDrawingRenderable(Material material) {
    Integer[] indices = {
            0, 1, 3, 3, 1, 2
    };

    // Find the bounding extents of the corner anchors.
    float min_x = Float.MAX_VALUE;
    float max_x = Float.MIN_VALUE;
    float min_z = Float.MAX_VALUE;
    float max_z = Float.MIN_VALUE;
    for (AnchorNode node : cornerAnchors) {
        float x = node.getWorldPosition().x;
        float z = node.getWorldPosition().z;
        min_x = Float.min(min_x, x);
        max_x = Float.max(max_x, x);
        min_z = Float.min(min_z, z);
        max_z = Float.max(max_z, z);
    }
    float width = Math.abs(max_x - min_x);
    float height = Math.abs(max_z - min_z);
    float extent = Math.min(width / 2, height / 2);

    Vertex[] vertices = {
            Vertex.builder()
                    .setPosition(new Vector3(-extent, 0, extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(0, 1)) // top left
                    .build(),
            Vertex.builder()
                    .setPosition(new Vector3(extent, 0, extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(1, 1)) // top right
                    .build(),
            Vertex.builder()
                    .setPosition(new Vector3(extent, 0, -extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(1, 0)) // bottom right
                    .build(),
            Vertex.builder()
                    .setPosition(new Vector3(-extent, 0, -extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(0, 0)) // bottom left
                    .build()
    };

    RenderableDefinition.Submesh[] submeshes = {
            RenderableDefinition.Submesh.builder()
                    .setMaterial(material)
                    .setTriangleIndices(Arrays.asList(indices))
                    .build()
    };
    RenderableDefinition def = RenderableDefinition.builder()
            .setSubmeshes(Arrays.asList(submeshes))
            .setVertices(Arrays.asList(vertices))
            .build();
    ModelRenderable.builder()
            .setSource(def)
            .setRegistryId("drawing")
            .build()
            .thenAccept(this::positionDrawing);
}
The last part is to position the quad in the center of the corners, and create a Transformable node so the image can be nudged into position, rotated, or scaled to be the perfect size.
private void positionDrawing(ModelRenderable drawingRenderable) {
    // Calculate the center of the corners.
    float min_x = Float.MAX_VALUE;
    float max_x = Float.MIN_VALUE;
    float min_z = Float.MAX_VALUE;
    float max_z = Float.MIN_VALUE;
    for (AnchorNode node : cornerAnchors) {
        float x = node.getWorldPosition().x;
        float z = node.getWorldPosition().z;
        min_x = Float.min(min_x, x);
        max_x = Float.max(max_x, x);
        min_z = Float.min(min_z, z);
        max_z = Float.max(max_z, z);
    }
    Vector3 center = new Vector3((min_x + max_x) / 2f,
            cornerAnchors.get(0).getWorldPosition().y, (min_z + max_z) / 2f);

    // Anchor the quad at the plane point under the computed center.
    Anchor centerAnchor = null;
    Vector3 screenPt = arFragment.getArSceneView().getScene().getCamera().worldToScreenPoint(center);
    List<HitResult> hits = arFragment.getArSceneView().getArFrame().hitTest(screenPt.x, screenPt.y);
    for (HitResult hit : hits) {
        if (hit.getTrackable() instanceof Plane) {
            centerAnchor = hit.createAnchor();
            break;
        }
    }

    AnchorNode centerNode = new AnchorNode(centerAnchor);
    centerNode.setParent(arFragment.getArSceneView().getScene());
    drawingNode = new TransformableNode(arFragment.getTransformationSystem());
    drawingNode.setParent(centerNode);
    drawingNode.setRenderable(drawingRenderable);
}
The intended AR reference image can be scaled using AR objects as corner points to size the template for the user.
More complex AR reference images will not work as easily, since the AR image is overlaid on top of the user's tracing and will obstruct the tip of their pen/pencil.
My solution is to chromakey the white paper. This replaces the white paper with the chosen image or live feed. Moving the paper around as you specified would be an issue unless you have a means of tracking the paper's position.
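As a rough illustration of the chromakey idea (my sketch, not the implementation from the video referenced below): pixels close to white are made transparent so a background image can show through when composited. This plain-Bitmap version is far too slow for a live feed, where you would do the same test in a fragment shader, but it shows the keying logic.

import android.graphics.Bitmap;
import android.graphics.Color;

public final class WhiteKey {
    // Returns a copy of `frame` in which near-white pixels are transparent.
    // `threshold` is the minimum per-channel value (0-255) counted as white.
    public static Bitmap keyOutWhite(Bitmap frame, int threshold) {
        Bitmap out = frame.copy(Bitmap.Config.ARGB_8888, true);
        for (int y = 0; y < out.getHeight(); y++) {
            for (int x = 0; x < out.getWidth(); x++) {
                int c = out.getPixel(x, y);
                if (Color.red(c) >= threshold
                        && Color.green(c) >= threshold
                        && Color.blue(c) >= threshold) {
                    out.setPixel(x, y, Color.TRANSPARENT);
                }
            }
        }
        return out;
    }
}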
As you can see in this example, AR objects are in front, while chromakey is background. Tracing surface (paper) would be in the center.
Reference to this example is on the link below.
RJ
YouTube - AR tracked environment
I am trying to move an object by touching it and dragging. I am testing on my Samsung Galaxy S III. I have used the following code, but for some reason the object moves faster than my finger; it should stay right beneath my finger. What is wrong? (Note: I haven't done the "move the object only if you touch it" part, so right now it moves wherever I touch.)
#pragma strict

var speed : float = 1;

function Start () {
}

function Update () {
    if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Moved) {
        // Get movement of the finger since last frame
        var touchDeltaPosition : Vector2 = Input.GetTouch(0).deltaPosition;
        // Move object across XY plane
        transform.position = Vector2.Lerp(transform.position,
                                          touchDeltaPosition,
                                          Time.deltaTime * speed);
    }
}
Here is what I use; it may be a better option for you, as it is based on the camera. I think the reason your Vector2.Lerp is not working correctly is the time value you pass as its 't' parameter. You could read up on Lerp and tweak the 't' value until it works for you, or try the code below, which is what I use. I also subtract from x and add to y so my finger isn't covering the graphic. Best of luck :)
#pragma strict

var distance : float = 5;

function Start () {
}

function Update () {
    if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Moved) {
        // Use the touch's screen position (not the delta) so the object
        // tracks the finger. Offsets can be applied here (e.g. subtract from
        // x and add to y) to keep the finger from covering the graphic.
        var touchPosition : Vector2 = Input.GetTouch(0).position;
        var touchMove : Vector3 = Vector3(touchPosition.x, touchPosition.y, distance);
        // Move the object in the XY plane at a fixed distance from the camera.
        transform.position = Camera.main.ScreenToWorldPoint(touchMove);
    }
}
}