Currently I'm using version 4.12.x of the HERE SDK (Navigate Edition). I want to display a route from point A to point B, or a bounding box, with some padding. I can run the animation, but in landscape I have a "loading" screen that takes up almost half of the screen, so the location and the waypoints/coordinates might not be visible. For example, if point A is in the south and point B is in the north, and the distance is far enough, I can't see the waypoints.
Currently I'm trying this:
val orientation = GeoOrientationUpdate(null, 0.0)
val sizeInPixels = Size2D(
    binding.hereMapsMapView.width.toDouble() - 100,
    binding.hereMapsMapView.height.toDouble() - 100
)
val origin = Point2D(1.0, 1.0)
val mapViewport = Rectangle2D(origin, sizeInPixels)
val cameraUpdate = MapCameraUpdateFactory.lookAt(geoBox, orientation, mapViewport)
val animation = MapCameraAnimationFactory.createAnimation(
    cameraUpdate,
    Duration.ofMillis(CAMERA_ANIMATION_TIME_IN_MILLIS),
    EasingFunction.IN_OUT_SINE
)
binding.hereMapsMapView.camera.startAnimation(animation) {
    if (it == AnimationState.COMPLETED || it == AnimationState.CANCELLED) {
        // complete
    }
}
I checked the documentation but couldn't find much info. Let's say my screen is 2044 px high and my "loading view" is 1200 px high.
I tried this:
val sizeInPixels = Size2D(
    binding.hereMapsMapView.width.toDouble() - 100,
    binding.hereMapsMapView.height.toDouble() + 1200
)
But it does not work; it only seems to behave with a value of around 700 px, and even then not all of the route is displayed. Is there any way to fix this?
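For what it's worth, judging from the lookAt overload used above, the Rectangle2D argument describes the screen rectangle the GeoBox is fitted into, so instead of inflating the size you can move the viewport's origin past the obscured area. A minimal sketch, assuming the loading view covers the left loadingWidthPx pixels in landscape (loadingWidthPx and paddingPx are hypothetical values):

val loadingWidthPx = 1200.0 // hypothetical: pixels covered by the loading view
val paddingPx = 50.0        // hypothetical: extra padding inside the visible area
// Anchor the viewport beyond the loading view instead of growing its size.
val visibleViewport = Rectangle2D(
    Point2D(loadingWidthPx + paddingPx, paddingPx),
    Size2D(
        binding.hereMapsMapView.width.toDouble() - loadingWidthPx - 2 * paddingPx,
        binding.hereMapsMapView.height.toDouble() - 2 * paddingPx
    )
)
val cameraUpdate = MapCameraUpdateFactory.lookAt(geoBox, orientation, visibleViewport)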
Related
I want to show the user's location above the drawer that I have in my app, instead of at the center of the map, since the drawer covers part of the map:
fun centerMap(height: Int, drawerHeight: Int) {
    LocationDataProvider.location?.let {
        val length = ((height - drawerHeight) / 2) + (drawerHeight - (height / 2))
        val point = googleMap.projection.toScreenLocation(LatLng(it.lat, it.lng))
        Log.d("Ali", "${point.y}, ${point.x}")
        val newY = point.y + length
        val newLatLng = googleMap.projection.fromScreenLocation(Point(point.x, newY))
        moveCamera(newLatLng)
    }
}
LocationDataProvider.location returns the center of the map. Here is the moveCamera method:
fun moveCamera(location: LatLng) {
    val cameraUpdate: CameraUpdate = CameraUpdateFactory.newLatLngZoom(location, 18F)
    googleMap.animateCamera(cameraUpdate)
    googleMap.setOnCameraMoveStartedListener(this@MapController)
}
In my logic, the centerMap method is called every 5 seconds. The problem is that it first zooms somewhere over the ocean, and then, 5 seconds later, it zooms to the desired location (top of the drawer). Here is the log output of the centerMap method every 5 seconds ("${point?.y.toString()}, ${point?.x.toString()}"):
D/Ali: 468, 1344
D/Ali: -18826752, 4133
D/Ali: 418, 542
D/Ali: 419, 539
...
As you can see, after the first log entry it moves over the ocean; from the second entry onward it moves to the desired location and stays there.
I concluded that when the zoom level is not 18f, the projection does not work as expected, so if I use the following method first, and then use the projection, it works as expected:
fun centerMap() {
    LocationDataProvider.location?.let {
        moveCamera(LatLng(it.lat, it.lng))
    }
}
That means the moveCamera method will be called twice. Is there any solution so that the projection works as expected regardless of the zoom level?
It seems you can use map padding, GoogleMap.setPadding():
public void setPadding (int left, int top, int right, int bottom)
Sets padding on the map.
This method allows you to define a visible region on the map, to signal to the map that portions of the map around the edges may be obscured, by setting padding on each of the four edges of the map. Map functions will be adapted to the padding. For example, the zoom controls, compass, copyright notices and Google logo will be moved to fit inside the defined region, camera movements will be relative to the center of the visible region, etc.
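Applied to the drawer case above, a minimal sketch (here drawerHeight stands in for the drawer's height in pixels and location for the value from LocationDataProvider):

// Reserve the drawer area at the bottom so camera movements center on the visible region.
googleMap.setPadding(0, 0, 0, drawerHeight)
// newLatLngZoom now centers the target in the unobscured part of the map,
// so the manual projection/offset math in centerMap() is no longer needed.
googleMap.animateCamera(CameraUpdateFactory.newLatLngZoom(LatLng(location.lat, location.lng), 18F))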
I'm trying to achieve a 3D "pop out of the screen" kind of effect using libGDX on Android (the camera process described at the link below):
https://www.anxious-bored.com/blog/
but the result I get is stretched objects when the eye moves around.
I simulate the eye position using a gyroscope to get the device rotation, and I assume a fixed distance from the device screen. I've also incorporated ARCore for eye tracking, but that also yields the same result.
Here is a GIF of what I'm seeing:
https://i.stack.imgur.com/YOf6n.gif
Anyone have any idea on what I'm doing wrong?
Here is the relevant code I'm using in Kotlin.
private fun updateCamera() {
    val camera = // The Perspective camera instance
    // Move the camera to the eye position, but keep the same camera direction
    camera.view.setToLookAt(camera.direction, camera.up)
        .mul(GDXHelperInstances.matrix4_1.setToTranslation(
            GDXHelperInstances.vector3_1.set(camera.position).scl(-1f)))
    // Get the size of the screen in meters
    val deviceHalfHeight = camera.viewportHeight / Gdx.graphics.ppcY * 0.01f
    val deviceHalfWidth = camera.viewportWidth / Gdx.graphics.ppcX * 0.01f
    // Get the device position and plane relative to the camera
    val pos = Vector3(Vector3.Zero).mul(camera.view)
    val plane = Plane(GDXHelperInstances.vector3_1.set(camera.direction).scl(-1f), Vector3.Zero)
    // Calculate the bounds of the viewport in virtual space (the device screen dimensions
    // in meters with center at Vector3.Zero) relative to the camera (view space)
    val left = pos.x - deviceHalfWidth
    val right = pos.x + deviceHalfWidth
    val bottom = pos.y - deviceHalfHeight
    val top = pos.y + deviceHalfHeight
    val nearScale = camera.near / plane.distance(camera.position).absoluteValue
    // Off-axis projection
    camera.projection.setToProjection(left * nearScale, right * nearScale,
        bottom * nearScale, top * nearScale, camera.near, camera.far)
    // Calculate the new asymmetric frustum
    camera.combined.set(camera.projection)
    Matrix4.mul(camera.combined.`val`, camera.view.`val`)
    camera.invProjectionView.set(camera.combined)
    Matrix4.inv(camera.invProjectionView.`val`)
    camera.frustum.update(camera.invProjectionView)
}
I would like to reduce the barcode tracking window when using the Google vision API. There are some answers here, but they feel a bit outdated.
I'm using google's sample: https://github.com/googlesamples/mlkit/tree/master/android/vision-quickstart
Currently, I try to figure out whether a barcode is inside my overlay box in the BarcodeScannerProcessor's onSuccess callback:
override fun onSuccess(barcodes: List<Barcode>, graphicOverlay: GraphicOverlay) {
    if (barcodes.isEmpty())
        return
    for (barcode in barcodes) {
        val center = Point(graphicOverlay.imageWidth / 2, graphicOverlay.imageHeight / 2)
        val rectWidth = graphicOverlay.imageWidth * Settings.OverlayWidthFactor
        val rectHeight = graphicOverlay.imageHeight * Settings.OverlayHeightFactor
        val left = center.x - rectWidth / 2
        val top = center.y - rectHeight / 2
        val right = center.x + rectWidth / 2
        val bottom = center.y + rectHeight / 2
        val rect = Rect(left.toInt(), top.toInt(), right.toInt(), bottom.toInt())
        val contains = rect.contains(barcode.boundingBox!!)
        val color = if (contains) Color.GREEN else Color.RED
        graphicOverlay.add(BarcodeGraphic(graphicOverlay, barcode, "left: ${barcode.boundingBox!!.left}", color))
    }
}
Y-wise it works perfectly, but the X values from barcode.boundingBox, e.g. barcode.boundingBox.left, seem to have an offset. Is it based on what's being calculated in GraphicOverlay?
I'm expecting the value below to be close to 0, but the offset is about 90 here.
Or perhaps it's more efficient to crop the image according to the box?
Actually, the bounding box is correct. The trick is that the image aspect ratio doesn't match the viewport aspect ratio, so the image is cropped horizontally. Try opening the settings (the gear in the top right corner) and choosing an appropriate resolution.
For example, take a look at these two screenshots. On the first one, the selected resolution (1080x1920) matches my phone's resolution, so the padding looks good (17px). On the second screenshot, the aspect ratio is different (1.0 for the 720x720 resolution), therefore the image is cropped and the padding looks incorrect.
So the offset should be transformed from image coordinates to the screen coordinates. Under the hood GraphicOverlay uses a matrix for this transformation. You can use the same matrix:
for (barcode in barcodes) {
    barcode.boundingBox?.let { bbox ->
        val offset = floatArrayOf(bbox.left.toFloat(), bbox.top.toFloat())
        graphicOverlay.transformationMatrix.mapPoints(offset)
        val leftOffset = offset[0]
        val topOffset = offset[1]
        ...
    }
}
The only thing is that the transformationMatrix is private, so you should add a getter to access it.
As you know, the preview size of the camera is configurable at the settings menu. This configurable size specifies the graphicOverlay dimensions.
On the other hand, the aspect ratio of the CameraSourcePreview (i.e. preview_view in activity_vision_live_preview.xml), which is shown on the screen, does not necessarily equal the ratio of the graphicOverlay, because it depends on the size of the phone's screen and the height that the parent ConstraintLayout allows it to occupy.
So, in the preview, based on the difference between the aspect ratio of graphicOverlay and preview_view, some part of the graphicOverlay might not be shown horizontally or vertically.
There are some parameters inside GraphicOverlay that can help us to adjust the left and top of the barcode's boundingBox in such a way that they start from 0 in the visible area.
First of all, they should be accessible from outside the GraphicOverlay class, so it's enough to write a getter method for each of them:
GraphicOverlay.java
public class GraphicOverlay extends View {
  ...
  /**
   * The factor of overlay View size to image size. Anything in the image coordinates needs to be
   * scaled by this amount to fit with the area of overlay View.
   */
  public float getScaleFactor() {
    return scaleFactor;
  }

  /**
   * The number of vertical pixels needed to be cropped on each side to fit the image with the
   * area of overlay View after scaling.
   */
  public float getPostScaleHeightOffset() {
    return postScaleHeightOffset;
  }

  /**
   * The number of horizontal pixels needed to be cropped on each side to fit the image with the
   * area of overlay View after scaling.
   */
  public float getPostScaleWidthOffset() {
    return postScaleWidthOffset;
  }
}
Now it is possible to calculate the left and top gaps using these parameters, like the following:
BarcodeScannerProcessor.kt
class BarcodeScannerProcessor(
    context: Context
) : VisionProcessorBase<List<Barcode>>(context) {
    ...
    override fun onSuccess(barcodes: List<Barcode>, graphicOverlay: GraphicOverlay) {
        if (barcodes.isEmpty()) {
            Log.v(MANUAL_TESTING_LOG, "No barcode has been detected")
        }
        val leftDiff = graphicOverlay.run { postScaleWidthOffset / scaleFactor }.toInt()
        val topDiff = graphicOverlay.run { postScaleHeightOffset / scaleFactor }.toInt()
        for (i in barcodes.indices) {
            val barcode = barcodes[i]
            val color = Color.RED
            val text = "left: ${barcode.boundingBox!!.left - leftDiff} top: ${barcode.boundingBox!!.top - topDiff}"
            graphicOverlay.add(MyBarcodeGraphic(graphicOverlay, barcode, text, color))
            logExtrasForTesting(barcode)
        }
    }
    ...
}
Visual Result:
Here is the visual result of the output. As is obvious in the pictures, the gap between both the left and top of the barcode and the left and top of the visible area starts from 0. In the case of the left picture, the graphicOverlay is set to a size of 480x640 (aspect ratio ≈ 1.3334), and for the right one, 360x640 (aspect ratio ≈ 1.7778). In both cases, on my phone, the CameraSourcePreview has a steady size of 1440x2056 pixels (aspect ratio ≈ 1.4278), so the calculation truly reflects the position of the barcode in the visible area.
(Note that the aspect ratio of the visible area is lower than that of the graphicOverlay in one experiment and greater in the other: 1.3334 < 1.4278 < 1.7778. So the left and top values are adjusted accordingly.)
I am investigating Augmented Reality on Android.
I am using ARCore and Sceneform within an Android application.
I have tried out the sample projects and now would like to develop my own application.
One effect I would like to achieve is to combine/overlay an image (say .jpeg or .png) with a live feed from the device's onboard camera.
The image will have a transparent background that allows the user to see the live feed and the image simultaneously.
However, I do not want the overlaid image to be a fixed/static watermark: when the user zooms in, out or pans, the overlaid image must also zoom in, out and pan, etc.
I do not wish the overlaid image to become 3D or anything of that nature.
Is this effect possible with Sceneform? Or will I need to use other 3rd-party libraries and/or tools to achieve the desired results?
UPDATE
The user is drawing on a blank sheet of white paper. The sheet of paper is orientated so that the user is comfortably drawing (either left or right handed). The user is free to move the sheet of paper while they complete their image.
An Android device is held above the sheet of paper filming the user drawing their selected image.
The live camera feed is being cast to a large TV or monitor screen.
To aid the user they have selected a static image to "trace" or "Copy".
This image is chosen on the Android device and is being combined with the live camera stream within the Android application.
The user can zoom in and out on their drawing, and the combined live stream and selected static image will also zoom in and out; this will enable the user to make an accurate copy of the selected static image by drawing freehand.
When the user looks directly at the sheet of paper, they only see their drawing.
When the user views the cast live stream of them drawing on the TV or monitor they see their drawing and the chosen static image superimposed. The user can control the transparency of the static image to assist them in making an accurate copy of it.
I think what you are looking for is to use AR to display an image so that the image stays in place, for example over a sheet of paper in order to act as a guide for drawing a copy of the image on the paper.
There are 2 parts to this. First is to locate the sheet of paper, the second is to place the image over the paper and keep it there as the phone moves around.
Locating the sheet of paper can be done just by detecting the plane with the paper (having some contrast, pattern or something vs. a plain white sheet of paper will help), then tapping where the center of the page should be. This is done in the HelloSceneform sample.
If you want a more accurate bounding of the paper, you could tap the 4 corners of the paper and then create anchors there. To do this, register a plane-tapped listener in onCreate():
arFragment.setOnTapArPlaneListener(this::onPlaneTapped);
Then in onPlaneTapped, create the 4 anchorNodes. Once you have 4, initialize the drawing to be displayed.
private void onPlaneTapped(HitResult hitResult, Plane plane, MotionEvent event) {
  if (cornerAnchors.size() != 4) {
    AnchorNode corner = createCornerNode(hitResult.createAnchor());
    arFragment.getArSceneView().getScene().addChild(corner);
    cornerAnchors.add(corner);
  }
  if (cornerAnchors.size() == 4 && drawingNode == null) {
    initializeDrawing();
  }
}
To initialize the drawing, create a Sceneform Texture from the bitmap or drawable. This can be from a resource or a file URL. You want the texture to show the whole image, and scale as the model holding it is resized.
private void initializeDrawing() {
  Texture.Sampler sampler = Texture.Sampler.builder()
      .setWrapMode(Texture.Sampler.WrapMode.CLAMP_TO_EDGE)
      .setMagFilter(Texture.Sampler.MagFilter.NEAREST)
      .setMinFilter(Texture.Sampler.MinFilter.LINEAR_MIPMAP_LINEAR)
      .build();
  Texture.builder()
      .setSource(this, R.drawable.logo_google_developers)
      .setSampler(sampler)
      .build()
      .thenAccept(texture -> {
        MaterialFactory.makeTransparentWithTexture(this, texture)
            .thenAccept(this::buildDrawingRenderable);
      });
}
The model to hold the texture is just a flat quad sized to the smallest dimension between the corners. This is the same logic as laying out a quad using OpenGL.
private void buildDrawingRenderable(Material material) {
  Integer[] indices = {
      0, 1, 3, 3, 1, 2
  };
  // Calculate the extents of the corners.
  float min_x = Float.MAX_VALUE;
  float max_x = Float.MIN_VALUE;
  float min_z = Float.MAX_VALUE;
  float max_z = Float.MIN_VALUE;
  for (AnchorNode node : cornerAnchors) {
    float x = node.getWorldPosition().x;
    float z = node.getWorldPosition().z;
    min_x = Float.min(min_x, x);
    max_x = Float.max(max_x, x);
    min_z = Float.min(min_z, z);
    max_z = Float.max(max_z, z);
  }
  float width = Math.abs(max_x - min_x);
  float height = Math.abs(max_z - min_z);
  float extent = Math.min(width / 2, height / 2);
  Vertex[] vertices = {
      Vertex.builder()
          .setPosition(new Vector3(-extent, 0, extent))
          .setUvCoordinate(new Vertex.UvCoordinate(0, 1)) // top left
          .build(),
      Vertex.builder()
          .setPosition(new Vector3(extent, 0, extent))
          .setUvCoordinate(new Vertex.UvCoordinate(1, 1)) // top right
          .build(),
      Vertex.builder()
          .setPosition(new Vector3(extent, 0, -extent))
          .setUvCoordinate(new Vertex.UvCoordinate(1, 0)) // bottom right
          .build(),
      Vertex.builder()
          .setPosition(new Vector3(-extent, 0, -extent))
          .setUvCoordinate(new Vertex.UvCoordinate(0, 0)) // bottom left
          .build()
  };
  RenderableDefinition.Submesh[] submeshes = {
      RenderableDefinition.Submesh.builder()
          .setMaterial(material)
          .setTriangleIndices(Arrays.asList(indices))
          .build()
  };
  RenderableDefinition def = RenderableDefinition.builder()
      .setSubmeshes(Arrays.asList(submeshes))
      .setVertices(Arrays.asList(vertices))
      .build();
  ModelRenderable.builder().setSource(def)
      .setRegistryId("drawing").build()
      .thenAccept(this::positionDrawing);
}
The last part is to position the quad in the center of the corners and create a TransformableNode so the image can be nudged into position, rotated, or scaled to the perfect size.
private void positionDrawing(ModelRenderable drawingRenderable) {
  // Calculate the center of the corners.
  float min_x = Float.MAX_VALUE;
  float max_x = Float.MIN_VALUE;
  float min_z = Float.MAX_VALUE;
  float max_z = Float.MIN_VALUE;
  for (AnchorNode node : cornerAnchors) {
    float x = node.getWorldPosition().x;
    float z = node.getWorldPosition().z;
    min_x = Float.min(min_x, x);
    max_x = Float.max(max_x, x);
    min_z = Float.min(min_z, z);
    max_z = Float.max(max_z, z);
  }
  Vector3 center = new Vector3((min_x + max_x) / 2f,
      cornerAnchors.get(0).getWorldPosition().y, (min_z + max_z) / 2f);
  Anchor centerAnchor = null;
  Vector3 screenPt = arFragment.getArSceneView().getScene().getCamera().worldToScreenPoint(center);
  List<HitResult> hits = arFragment.getArSceneView().getArFrame().hitTest(screenPt.x, screenPt.y);
  for (HitResult hit : hits) {
    if (hit.getTrackable() instanceof Plane) {
      centerAnchor = hit.createAnchor();
      break;
    }
  }
  AnchorNode centerNode = new AnchorNode(centerAnchor);
  centerNode.setParent(arFragment.getArSceneView().getScene());
  drawingNode = new TransformableNode(arFragment.getTransformationSystem());
  drawingNode.setParent(centerNode);
  drawingNode.setRenderable(drawingRenderable);
}
The intended AR reference image can be scaled using AR objects as points for sizing the template for the user.
More complex AR images will not work easily, since the AR image is overlaid on top of the user's tracing, and it will obstruct the tip of their pen/pencil.
My solution is to chroma-key the white paper. This will replace the white paper with the chosen image or live feed. Moving the paper around as you specified would be an issue, unless you have a means of tracking the paper's position.
As you can see in this example, AR objects are in front, while the chroma-keyed content is in the background. The tracing surface (paper) would be in the center.
A reference to this example is at the link below.
RJ
YouTube - AR tracked environment
I have a map with overlays which I want to cache.
For each place the user visited on the map (which is a rectangular area), I check whether I have a cache of the overlays that reside in this rectangle.
In order to improve caching (so that if the user was previously on the same rectangle, except that now he is a few meters away from it, the cache still hits), I want to "round" the coordinates.
This way, each time the user is in a rectangle, I check whether this rectangle is similar to previously cached rectangles, and if so I use the cached result.
Also, if the user is zoomed out and his rectangle is contained within a bigger (previously cached) rectangle, then I can also use the cached rectangle.
Any suggestions?
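For the "rounding" part, a minimal sketch of snapping coordinates to a grid so that nearby viewports produce the same cache key (CacheKey and gridSize are hypothetical names; the grid size would need tuning for your use case):

import kotlin.math.floor

data class CacheKey(val lat: Double, val lng: Double)

// Snap a coordinate down to its grid cell so viewports that differ by a few
// meters map to the same key. For the zoomed-out case you could keep one grid
// per zoom level and also probe the coarser grids for a containing rectangle.
fun cacheKey(lat: Double, lng: Double, gridSize: Double = 0.01): CacheKey =
    CacheKey(floor(lat / gridSize) * gridSize, floor(lng / gridSize) * gridSize)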
If you're just looking at how to group the coordinates, decide on the maximum difference you want between coordinates in x and y (or latitude and longitude). Then there are two ways you can go about grouping them. The first is easier, but it will be slow if you have many points.
Say we have a data structure called cachedPoints, a maximum distance between related points called maxDistance, and a new point called point that we're checking for closeness to the others:
for (cachedPoint in cachedPoints) {
    // Compare absolute differences; without abs() any point far off in the
    // negative direction would incorrectly pass the check.
    if (abs(point.x - cachedPoint.x) < maxDistance &&
        abs(point.y - cachedPoint.y) < maxDistance) {
        cachedPoint.incrementVisits()
    }
}
The other way is to use a data structure sorted by x (or latitude), search it for a cachedPoint with an x (or latitude) within maxDistance of point, and then check the y (or longitude). It'd be a bit faster, but it takes a sorted structure to implement and adds a bunch of complexity you may not need.
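A minimal sketch of that sorted-structure idea, using a TreeMap keyed by x to narrow the candidates before checking y (CachedPoint and its fields are hypothetical):

import java.util.TreeMap
import kotlin.math.abs

data class CachedPoint(val x: Double, val y: Double, var visits: Int = 0)

val byX = TreeMap<Double, MutableList<CachedPoint>>()

fun findNearby(x: Double, y: Double, maxDistance: Double): CachedPoint? =
    // subMap narrows the x range in O(log n) instead of scanning every point.
    byX.subMap(x - maxDistance, true, x + maxDistance, true)
        .values
        .flatten()
        .firstOrNull { abs(it.y - y) < maxDistance }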
Hopefully that's what you're asking.
If you set up a data structure like:
var a = {
  'scales': [50, 100, 200, 400, 1000],
  'cachedRects': [
    { 'location': 'rect-large-1234-5678.png', x: 1234, y: 5678, scale: 3 },
    { 'location': 'rect-small-1240-5685.png', x: 1240, y: 5685, scale: 1 }
  ]
};
you can use the modulo function to do this:
var currentx = GetCurrentX();
var currenty = GetCurrentY();
var currentScale = GetCurrentScale();
var rectFound = false;
for (var i = 0; i < a.cachedRects.length; i++) {
  var rect = a.cachedRects[i];
  // Snap the current position down to the grid for this scale before comparing.
  if (rect.scale === currentScale
      && currentx - (currentx % a.scales[currentScale]) === rect.x
      && currenty - (currenty % a.scales[currentScale]) === rect.y) {
    rectFound = true;
    useOverlay(rect);
    break;
  }
}
if (!rectFound) {
  // could loop again for a larger rectangle of a lower scale.
}
The above may or may not turn out to be valid JS - I've not tried to run it. I hope you get the gist, anyway.
You can add a marker in Google Maps v2 on Android.
Here is code to add a marker:
MarkerOptions mOpt = new MarkerOptions();
// map.clear();
mOpt.position(new LatLng(userHstry.getMyLatlng().latitude, userHstry.getMyLatlng().longitude));
mOpt.title("Address : " + userHstry.getAddress())
    .snippet("Date : " + userHstry.getDate() + " , Time : " + userHstry.getTime());
map.addMarker(mOpt);