How to find Center Point from geometry arcgis android?

I am developing an application in which I want to show a callout at the center of a geometry on my map. I am new to ArcGIS. I have tried a lot, but I am unable to get the callout in the center. Please, can anybody help me solve this problem?
My code:
SimpleFillSymbol sfs = new SimpleFillSymbol(Color.RED);
sfs.setAlpha(5);
graphic = new Graphic(feature.getGeometry(), sfs, feature.getAttributes());
Polygon polygon = (Polygon) graphic.getGeometry();
int polygonpointscount = polygon.getPointCount();
if (polygonpointscount != 0) {
    pointsize = polygonpointscount / 2;
}
Point midpoint = polygon.getPoint(pointsize);
Callout callout = mMapView.getCallout();
if (callout != null && callout.isShowing()) {
    callout.hide();
}
// Set the content, show the view
callout.setContent(adminsearchloadView(governorate_Name, area_Name, block_Number));
callout.setStyle(R.xml.calloutstyle);
callout.setMaxHeight(100000);
callout.setMaxWidth(100000);
callout.refresh();
callout.show(midpoint);

Short answer: use GeometryEngine.getLabelPointForPolygon(Polygon, SpatialReference).
Long answer:
From your code...
int polygonpointscount = polygon.getPointCount();
if (polygonpointscount != 0) {
    pointsize = polygonpointscount / 2;
}
Polygon.getPointCount() returns the number of vertices of the polygon. For example, if the polygon is a rectangle, getPointCount() counts the corners, so your Callout ends up at one of the corners instead of at the centroid.
Instead, use GeometryEngine.getLabelPointForPolygon(Polygon, SpatialReference). It isn't guaranteed to return the centroid, but it returns an interior point that is good for labeling (and it looks like the centroid to me). Make sure you pass a SpatialReference object that tells getLabelPointForPolygon the spatial reference of your polygon.
If you must have the centroid, you'll need to create a geoprocessing service based on ArcGIS's Feature to Point tool. However, getLabelPointForPolygon is much easier to implement and faster to execute and probably satisfies your need.
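If you really do need the true centroid and would rather not call a geoprocessing service, the area-weighted centroid of a simple polygon can also be computed client-side with the shoelace formula. A minimal sketch in plain Java (the helper class is hypothetical, not part of the ArcGIS API):

```java
// Area-weighted centroid of a simple (non-self-intersecting) polygon,
// computed with the shoelace formula. Hypothetical helper, not ArcGIS API.
public class PolygonCentroid {
    // xs/ys are the polygon vertices in order; the ring need not be closed.
    public static double[] centroid(double[] xs, double[] ys) {
        double a = 0, cx = 0, cy = 0;
        int n = xs.length;
        for (int i = 0; i < n; i++) {
            int j = (i + 1) % n; // next vertex, wrapping around to the start
            double cross = xs[i] * ys[j] - xs[j] * ys[i];
            a += cross;
            cx += (xs[i] + xs[j]) * cross;
            cy += (ys[i] + ys[j]) * cross;
        }
        a *= 0.5; // signed polygon area
        return new double[] { cx / (6 * a), cy / (6 * a) };
    }
}
```

For a unit square this returns (0.5, 0.5), as expected. Note this is planar math, so it is only meaningful in a projected spatial reference, not raw lat/lng.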

If you use ArcGIS Server, you can try the Feature To Point tool (http://resources.arcgis.com/en/help/main/10.1/index.html#//00170000003m000000) to calculate a centroid layer.

Related

AR Android Sceneform SDK display model only on the floor

I'm using Sceneform SDK for Android version
implementation 'com.google.ar.sceneform.ux:sceneform-ux:1.15.0'
I need my 3D model to be displayed only on the floor. For example, I have a 3D cage (a simple transparent cuboid) and I need to place this 3D model over a real object. Right now, if the real object has a big enough surface, the model is placed on top of it instead of going over it, and I need to avoid this behavior.
Here is some code.
Logic to init the ArFragment and display the model at the center of the camera. I'm making a hit test at the center of my device's camera every time the frame changes.
private fun initArFragment() {
    arFragment.arSceneView.scene.addOnUpdateListener {
        arFragment.arSceneView?.let { sceneView ->
            sceneView.arFrame?.let { frame ->
                if (frame.camera.trackingState == TrackingState.TRACKING) {
                    val hitTest = frame.hitTest(sceneView.width / 2f, sceneView.height / 2f)
                    val hitTestIterator = hitTest.iterator()
                    if (hitTestIterator.hasNext()) {
                        val hitResult = hitTestIterator.next()
                        val anchor = hitResult.createAnchor()
                        if (anchorNode == null) {
                            anchorNode = AnchorNode()
                            anchorNode?.setParent(sceneView.scene)
                            transformableNode = DragTransformableNode(arFragment.transformationSystem)
                            transformableNode?.setParent(anchorNode)
                            boxNode = createBoxNode(.4f, .6f, .4f) // Creating cuboid
                            boxNode?.setParent(transformableNode)
                        }
                        anchorNode?.anchor?.detach()
                        anchorNode?.anchor = anchor
                    }
                }
            }
        }
    }
}
I think it's expected behavior, because the hit test hits the surface of the real object as well, but I don't know how to avoid it.
Is there a way to ignore real objects and always place the 3D model on the floor?
UPDATE
I tried to follow @Mick's suggestions. I'm trying to group all the hit test results: when the hit test is done, I get a list of HitResults for all visible planes, and I group them by their rounded Y value.
Example
{1.35 -> [Y is 1.36776767, Y is 1.35434343, Y is 1.35999999, Y is 1.37723278]}
{1.40 -> [Y is 1.4121212, Y is 1.403232323, Y is 1.44454545, Y is 1.40001011]}
Then, for the X and Z anchor coordinates, I use the FIRST HitResult from the group with the MIN key (sorting the keys from MIN to MAX; in the example it's 1.35).
For the Y anchor coordinate, I take the elements of the MIN group and use their average value.
val hitResultList = hitTestIterator.asSequence().toList()
    .groupBy { round(it.hitPose.ty() * 20) / 20 }
    .minBy { it.key }?.value
val hitResult = hitResultList?.first()!!
val averageValueOfY = hitResultList?.map { it.hitPose.ty() }?.average()
createModel(hitResult, averageValueOfY)
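To make the grouping step concrete, here is a standalone sketch of the same idea in plain Java (the 1/20 m bucket size mirrors the round(ty * 20) / 20 expression above; the class and method names are made up):

```java
// Standalone illustration of the grouping idea: bucket hit Y values into
// 1/20 m buckets, take the lowest bucket, and return its average.
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class HitGrouping {
    public static double lowestBucketAverage(double[] ys) {
        // Keys are sorted automatically, so firstEntry() is the MIN bucket.
        TreeMap<Long, List<Double>> buckets = new TreeMap<>();
        for (double y : ys) {
            long key = Math.round(y * 20); // same rounding granularity as above
            buckets.computeIfAbsent(key, k -> new ArrayList<>()).add(y);
        }
        return buckets.firstEntry().getValue().stream()
                .mapToDouble(Double::doubleValue).average().orElse(Double.NaN);
    }
}
```

With the example values {1.36, 1.35, 1.41, 1.44}, the 1.35-ish bucket wins and the result is its average, 1.355.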
Method to create Model
private fun createModel(newHitResult: HitResult, averageValueOfY: Double) {
    try {
        val newAnchorPose = newHitResult.createAnchor().pose
        anchorNode?.anchor?.detach()
        anchorNode?.anchor = arFragment.arSceneView.session?.createAnchor(
            Pose(
                floatArrayOf(newAnchorPose.tx(), averageValueOfY.toFloat(), newAnchorPose.tz()),
                floatArrayOf(0f, 0f, 0f, 1f) // identity quaternion; (0, 0, 0, 0) is not a valid rotation
            )
        )
        isArBagModelRendered = true
        transformableNode?.select()
    } catch (exception: Exception) {
        Timber.d(exception)
    }
}
This code update helped me get the behaviour I was trying to achieve, but I noticed that sometimes my Y anchor point is underground. It looks like the MIN plane was detected under the floor :( and I don't know how to fix this issue for now.
Actually, I think you will face two separate problems with your use case:
the object being placed on the top surface, as you have highlighted in the question
occlusion, i.e. not hiding the part of the model that should be behind the real object once the model is actually put in the correct place.
A simple solution to the first problem, so you can check the second (or maybe more accurately a workaround), might be to simply have the user place the object in front of the real object (i.e. the case in your example above) and then move it back until it is exactly where they want it.
If you leave the plane highlighting on (the grid lines which show where a plane is detected), it may also be more intuitive for a user to 'hit' the floor rather than the top of the object.
This would allow you to test quickly whether the occlusion issue is actually the more serious one, before you go much further.
A more complex solution would be to iterate through the planes and experiment with comparing the 'pose' at the centre of each plane, to see if you can find a reliable way to decide which one is the floor. The method is part of the Plane class:
public Pose getCenterPose ()
Returns the pose of the center of the detected plane. The pose's transformed +Y axis will be point normal out of the plane, with the +X and +Z axes orienting the extents of the bounding rectangle.
There are also methods to get the width and depth of the plane, if you are sure the floor will always be the biggest plane:
public float getExtentX ()
Returns the length of this plane's bounding rectangle measured along the local X-axis of the coordinate space centered on the plane.
public float getExtentZ ()
Returns the length of this plane's bounding rectangle measured along the local Z-axis of the coordinate frame centered on the plane.
Unfortunately, I don't think there are any existing handy helper functions like 'get lowest plane' or 'get largest plane'.
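As a sketch of such a heuristic, you could summarize each plane's getCenterPose() Y and its extents into a plain value object and pick the lowest sufficiently large plane as the floor. This is an illustration under stated assumptions, not ARCore API code (PlaneInfo is a made-up stand-in for data read from Plane):

```java
// Hypothetical "find the floor plane" heuristic: among planes whose bounding
// rectangle exceeds a minimum size, pick the one with the lowest center Y.
import java.util.List;

public class FloorPicker {
    public static class PlaneInfo {
        public final double centerY, extentX, extentZ;
        public PlaneInfo(double centerY, double extentX, double extentZ) {
            this.centerY = centerY;
            this.extentX = extentX;
            this.extentZ = extentZ;
        }
    }

    // Returns the candidate floor plane, or null if no plane is large enough.
    public static PlaneInfo pickFloor(List<PlaneInfo> planes, double minExtent) {
        PlaneInfo floor = null;
        for (PlaneInfo p : planes) {
            if (p.extentX < minExtent || p.extentZ < minExtent) continue; // too small
            if (floor == null || p.centerY < floor.centerY) floor = p;    // lowest wins
        }
        return floor;
    }
}
```

The minimum-extent filter is what guards against the "plane detected under the floor" artifact mentioned in the update: spurious planes tend to be small, so requiring a minimum size before trusting the lowest Y helps, though it is only a heuristic.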
Note on the occlusion issue, there are frameworks and libraries that do provide some forms of software based occlusion, i.e. without requiring the device to have extra depth sensors, so it may be worth exploring these a little also.

Android - google maps gets stuck

I'm developing an app which displays a Google map and a bunch of markers on it. There are a lot of markers, so I divided them into smaller groups and display only those which are within certain bounds, depending on the current position of the camera.
To do that I'm using the GoogleMap.OnCameraIdleListener. First I remove the listener, do my calculations and drawing, and then I restore the listener in the Fragment containing my map:
@Override
public void onCameraIdle() {
    mMap.setOnCameraIdleListener(null);
    clearMap();
    findTheMarkersInBounds();
    displayTheMarkers();
    mMap.setOnCameraIdleListener(this);
}
This way I only draw the markers I need to display, and the performance is way better than having 1000 markers on the map at once. I also draw about the same number of polylines, but that's not the point now.
For some strange reason, after some panning and zooming the map doesn't respond anymore. I can't zoom it or pan it. The app displays a dialog saying it is not responding and asking whether I want to wait or close the app. No errors are displayed in logcat. I can't tell exactly when this happens: sometimes after the first pan, sometimes I can move around for 2-3 minutes. The same thing happens on the emulator and on a physical device.
Anyone experienced something like this? Thanks!
Or am I approaching this the wrong way? How else should I optimize the map to display about 1000 markers and polylines? (The markers have text on them, so they can't share the same Bitmap, and the polylines can have different colors and need to be clickable, so I can't combine them into one large polyline.)
EDIT: A little more info about my methods:
After all the marker positions are loaded from the internal database, I loop through all of them and, based on their positions, place each one in the corresponding region. It's a 2D array of lists.
My whole area is divided into 32x32 smaller rectangular areas. When I'm searching for the markers to display, I determine which region is in view and display only the markers in that area.
This way I don't need to loop over all of the markers.
My methods (very simplified) look like this:
ArrayList<MarkerObject> markersToDisplay = new ArrayList<MarkerObject>();

private void findTheMarkersInBounds() {
    markersToDisplay.clear();
    LatLngBounds bounds = mMap.getProjection().getVisibleRegion().latLngBounds;
    int[] regionCoordinates = getRegionCoordinates(bounds); // i, j coordinates of my regions [0..31][0..31]
    markersToDisplay.addAll(subdividedMarkers[regionCoordinates[0]][regionCoordinates[1]]);
}

private void drawMarkers() {
    if ((markersToDisplay != null) && (markersToDisplay.size() > 0)) {
        for (int i = 0; i < markersToDisplay.size(); i++) {
            MarkerObject mo = markersToDisplay.get(i);
            LatLng position = new LatLng(mo.gpsLat, mo.gpsLon);
            BitmapDescriptor bitmapDescriptor = BitmapDescriptorFactory.fromBitmap(createMarker(getContext(), mo.title));
            GroundOverlay m = mMap.addGroundOverlay(groundOverlayOptions.image(bitmapDescriptor).position(position, 75));
            m.setClickable(true);
        }
    }
}
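Since getRegionCoordinates() isn't shown, here is a rough sketch of how such a grid lookup could work, assuming the whole covered area is a fixed lat/lng rectangle (the min/max bound parameters are assumptions, not from the question):

```java
// Rough sketch of mapping a lat/lng point into a 32x32 region grid.
// minLat/maxLat/minLon/maxLon describe the whole covered area.
public class RegionGrid {
    static final int GRID = 32;

    public static int[] regionFor(double lat, double lon,
                                  double minLat, double maxLat,
                                  double minLon, double maxLon) {
        // Normalize into [0, 1), scale to grid indices, and clamp to the edges.
        int i = (int) ((lat - minLat) / (maxLat - minLat) * GRID);
        int j = (int) ((lon - minLon) / (maxLon - minLon) * GRID);
        return new int[] {
            Math.min(Math.max(i, 0), GRID - 1),
            Math.min(Math.max(j, 0), GRID - 1)
        };
    }
}
```

In practice you would call this with the center of the visible LatLngBounds; when the viewport straddles a region boundary, you may want to include the neighboring cells as well.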
It is hard to help you without the source code of findTheMarkersInBounds() and displayTheMarkers(), but it seems you need a different approach to increase performance, for example:
improve your findTheMarkersInBounds() logic if possible;
run findTheMarkersInBounds() in a separate thread and show not all markers at once, but one by one (or in batches of 10-20) while findTheMarkersInBounds() is searching;
improve your displayTheMarkers() if possible; you could actually use custom drawing on a canvas (like in this answer) instead of creating thousands of Marker objects.
For question updates:
Small improvements first (the main one follows):
pass the approximate max size of markersToDisplay as a constructor parameter:
ArrayList<MarkerObject> markersToDisplay = new ArrayList<MarkerObject>(1000);
Instead of for (int i=0; i<markersToDisplay.size(); i++) {
use for (MarkerObject mo: markersToDisplay) {
Do not create the LatLng position every time; create it once and store it in a MarkerObject field.
Main improvement:
These lines are the source of the issues:
BitmapDescriptor bitmapDescriptor = BitmapDescriptorFactory.fromBitmap(createMarker(getContext(), mo.title));
GroundOverlay m = mMap.addGroundOverlay(groundOverlayOptions.image(bitmapDescriptor).position(position, 75));
IMHO, using Ground Overlays to show thousands of markers is a bad idea. A Ground Overlay is meant for showing a few "user" maps over the default Google Map (like a local plan of a park, or zoo details). Use custom drawing on a canvas like in the link above. But if you do decide to use Ground Overlays, do not recreate them every time: create each one once, store references to them in MarkerObject, and reuse them:
// once, when the marker is created (just an example)
mo.overlayOptions = new GroundOverlayOptions()
        .image(BitmapDescriptorFactory.fromBitmap(createMarker(getContext(), mo.title)))
        .position(mo.position, 75)
        .clickable(true);
...
// in your drawMarkers() - just add:
...
for (MarkerObject mo : markersToDisplay) {
    if (mo.overlayOptions == null) {
        mo.overlayOptions = createOverlayOptionsForThisMarker();
    }
    mMap.addGroundOverlay(mo.overlayOptions);
}
But IMHO, get rid of the thousands of Ground Overlays altogether and use custom drawing on a canvas.
After further investigation and communication with the Google Maps Android tech support, we came to a solution: there is a bug in the GroundOverlay.setZIndex() method.
All you have to do is update to the newest API version; the bug is no longer present in Google Maps SDK v3.1.
At the moment it is in beta, but the migration is pretty straightforward.

How to set a polygon over the region surrounding the Route at certain distance (in HereMaps)

I tried getting the BoundingBox of the route instance and setting a polygon over it, but the result was a rectangle over the route, as shown in the image below, which is inappropriate.
I also tried adding a BoundingBox of some color, with an alpha value for transparency, over each geocoordinate of the route at a certain distance, but the polygons overlapped and hid the route, like in the image below. Note: the red circle shows the route, which is only somewhat visible in certain locations due to less overlapping.
I am unable to find any way to merge the multiple polygons into one giant polygon surrounding the route, as in the second image.
Below is the code which produced the result in the second image.
fun addBoundingBoxTo(center: GeoCoordinate) {
    val boundingBox = GeoBoundingBox(center, 1000f, 1000f)
    val coordinates: MutableList<GeoCoordinate> = ArrayList()
    coordinates.add(boundingBox.topLeft)
    coordinates.add(GeoCoordinate(boundingBox.topLeft.latitude,
        boundingBox.bottomRight.longitude,
        boundingBox.topLeft.altitude))
    coordinates.add(boundingBox.bottomRight)
    coordinates.add(GeoCoordinate(boundingBox.bottomRight.latitude,
        boundingBox.topLeft.longitude, boundingBox.topLeft.altitude))
    val geoPolygon = GeoPolygon(coordinates)
    val polygon = MapPolygon(geoPolygon)
    polygon.fillColor = Color.parseColor("#77777777")
    polygon.lineWidth = 0
    map.addMapObject(polygon)
}

route.routeGeometry.forEach {
    addBoundingBoxTo(it)
}
The desired result I want to achieve is like in the below image:
Any help would be appreciated. Thanks!
With the latest December release, you can now also specify up to 20 polygons in a routing request. This allows a more precise and simpler way to define areas that should be avoided.

How can I encode a circle in android maps?

I know that we can encode a polygon in android like this:
encoded_string = polyutil.encode(polygon.getPoints());
But how can we get an encoded string for a circle?
I don't know what polyutil is or what its encode method does, but a circle's equivalent of a polygon's points is its center and radius. It shouldn't be too hard to call getCenter() and getRadius() on the circle and build a string with that data in it.
https://developer.android.com/reference/com/google/android/gms/maps/model/Circle.html
You may be able to use Java Topology Suite to do it.
http://mvnrepository.com/artifact/com.vividsolutions/jts/1.11
Coordinate center = new Coordinate(entity.getLongitude(), entity.getLatitude());
GeometricShapeFactory gsf = new GeometricShapeFactory();
gsf.setCentre(center);
gsf.setNumPoints(20);
gsf.setSize(10.2);
Polygon poly = gsf.createCircle();
Coordinate[] coordArray = poly.getCoordinates();
Not sure if this is what you want, but it'll give you an array of Coordinates on the perimeter of the circle, so it may be worth playing around with.
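If you'd rather avoid the JTS dependency, the same perimeter sampling is just trigonometry. A rough planar sketch (the class name is hypothetical, and the meters-to-degrees factor is a crude equatorial approximation, only reasonable for small radii away from the poles):

```java
// Approximate a circle by N perimeter points so the result can be fed to a
// polyline/polygon encoder. Treats lat/lng as a flat plane (small radii only).
import java.util.ArrayList;
import java.util.List;

public class CirclePoints {
    // Each returned element is {lat, lng} on the circle's perimeter.
    public static List<double[]> perimeter(double lat, double lng,
                                           double radiusMeters, int n) {
        double radiusDeg = radiusMeters / 111_320.0; // rough meters per degree
        List<double[]> pts = new ArrayList<>();
        for (int k = 0; k < n; k++) {
            double theta = 2 * Math.PI * k / n; // angle of the k-th sample
            pts.add(new double[] {
                lat + radiusDeg * Math.sin(theta),
                lng + radiusDeg * Math.cos(theta)
            });
        }
        return pts;
    }
}
```

The resulting list of points can then be encoded the same way as any polygon's points.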

Unity2D Android Touch misbehaving

I am attempting to translate an object depending on the user's touch position.
The problem is that when I test it out, the object disappears as soon as I drag my finger on my phone screen. I am not entirely sure what's going on.
If somebody can guide me, that would be great :)
Thanks
This is the Code:
#pragma strict

function Update () {
    for (var touch : Touch in Input.touches)
    {
        if (touch.phase == TouchPhase.Moved) {
            transform.Translate(0, touch.position.y, 0);
        }
    }
}
The problem is that you're moving the object by touch.position.y. This isn't a point in the world; it's a point on the touch screen. What you probably want is Camera.main.ScreenToWorldPoint(touch.position).y, which will give you the in-world position of wherever you've touched.
Of course, Translate takes a vector indicating a distance, not a final destination, so simply sticking the above into it still won't work as you're intending.
Instead maybe try this:
Vector3 EndPos = Camera.main.ScreenToWorldPoint(touch.position);
float speed = 1f;
transform.position = Vector3.Lerp(transform.position, EndPos, speed * Time.deltaTime);
which should move the object towards your finger while at the same time keeping its movements smooth looking.
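For what it's worth, the Lerp call above just moves the value a fraction of the remaining distance each frame; in plain Java the underlying math looks like this (hypothetical helper, mirroring the clamping behavior of Unity's Mathf.Lerp):

```java
// Linear interpolation between a and b by fraction t, with t clamped to [0, 1],
// matching how Unity's Mathf.Lerp behaves.
public class Smooth {
    public static float lerp(float a, float b, float t) {
        t = Math.min(Math.max(t, 0f), 1f); // clamp the fraction
        return a + (b - a) * t;
    }
}
```

Calling this every frame with t = speed * deltaTime yields the ease-out motion toward the finger: large steps when far away, smaller steps as the object approaches the target.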
You'll want to ask this question on Unity's dedicated Questions/Answers site: http://answers.unity3d.com/index.html
Very few people come to Stack Overflow for Unity-specific questions, unless they relate to Android/iOS-specific features.
As for the cause of your problem, touch.position.y is defined in screen space (pixels), whereas transform.Translate is expecting world units (meters). You can convert between the two using the Camera.ScreenToWorldPoint() method, then create a vector out of the camera position and the screen world point. With this vector you can then either intersect some geometry in the scene or simply use it as a point in front of the camera.
http://docs.unity3d.com/Documentation/ScriptReference/Camera.ScreenToWorldPoint.html
