I am drawing an ink annotation from points stored in a database; the points were extracted from a shape previously drawn over a PDF. I have followed the example given by PDFTron, but the annotation is not drawn on the page properly.
Actual image: (screenshot)
Drawn programmatically: (screenshot)
Here is the code I have used to draw the annotation:
for (Integer integer : uniqueShapeIds) {
    Config.debug("Shape Id's unique " + integer);
    pdftron.PDF.Annots.Ink ink = pdftron.PDF.Annots.Ink.create(
            mPDFViewCtrl.getDoc(),
            getAnnotationRect(pointsArray, integer));
    for (SaveAnnotationState annot : pointsArray) {
        Config.debug("Draw " + annot.getxCord() + " " + annot.getyCord()
                + " " + annot.getPathIndex() + " " + annot.getPointIndex());
        Point pt = new Point(annot.getxCord(), annot.getyCord());
        ink.setPoint(annot.getPathIndex(), annot.getPointIndex(), pt);
        ink.setColor(
                new ColorPt(annot.getR() / 255, annot.getG() / 255,
                        annot.getB() / 255), 3);
        ink.setOpacity(annot.getOpacity());
        BorderStyle border = ink.getBorderStyle();
        border.setWidth(annot.getThickness());
        ink.setBorderStyle(border);
    }
    ink.refreshAppearance();
    Page page = mPDFViewCtrl.getDoc().getPage(mPDFViewCtrl.getCurrentPage());
    Annot mAnnot = ink;
    page.annotPushBack(mAnnot);
    mPDFViewCtrl.update(mAnnot, mPDFViewCtrl.getCurrentPage());
}
Can anyone tell me what is going wrong here?
On a typical PDF page, the bottom left corner of the page is coordinate (0,0). However, for annotations the origin is the bottom left corner of the rectangle specified in the BBox entry. The BBox entry is the third parameter of your call to Ink.Create, which is unfortunately called pos.
This means the Rect passed into Ink.Create is supposed to be the minimum axis-aligned bounding box of all the points that make up the Ink annotation.
I suspect that in your call to getAnnotationRect you start with Rect(), which is really Rect(0,0,0,0), so when you union in all the other points you end up with an inflated Rect.
What you should do is store the BBox in your database, by calling Annot.getRect().
If this is not possible, or too late, then initialize the Rect with the first point in your database.
Rect rect = new Rect(pt.x, pt.y, pt.x, pt.y);
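Since getAnnotationRect isn't shown in the question, here is a minimal sketch of what it could look like, seeded with the first point rather than Rect(0,0,0,0). It assumes the SaveAnnotationState getters from the question; filtering by shape id is omitted, and plain min/max is used instead of Rect unions:

import java.util.List;
import pdftron.Common.PDFNetException;
import pdftron.PDF.Rect;

// Hypothetical helper: the minimum axis-aligned bounding box of all points.
static Rect getAnnotationRect(List<SaveAnnotationState> points) throws PDFNetException {
    double minX = points.get(0).getxCord(), maxX = minX;
    double minY = points.get(0).getyCord(), maxY = minY;
    for (SaveAnnotationState p : points) {
        minX = Math.min(minX, p.getxCord());
        minY = Math.min(minY, p.getyCord());
        maxX = Math.max(maxX, p.getxCord());
        maxY = Math.max(maxY, p.getyCord());
    }
    return new Rect(minX, minY, maxX, maxY);
}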
API:
http://www.pdftron.com/pdfnet/mobile/docs/Android/pdftron/PDF/Annot.html#getRect%28%29
http://www.pdftron.com/pdfnet/mobile/docs/Android/pdftron/PDF/Annot.html#create%28pdftron.SDF.Doc,%20int,%20pdftron.PDF.Rect%29
I'm using the Sceneform SDK for Android, version:
implementation 'com.google.ar.sceneform.ux:sceneform-ux:1.15.0'
I need my 3D model to be displayed only on the floor. For example, I have a 3D cage (a simple transparent cuboid) and I need to place this model over a real object. Currently, if the real object has a big enough surface, the model is placed on top of it instead of going over it, and I need to avoid this behavior.
Here is my code: the logic to initialize the ArFragment and display the model at the center of the camera. I perform a hit test at the center of the device camera every time the frame changes.
private fun initArFragment() {
    arFragment.arSceneView.scene.addOnUpdateListener {
        arFragment.arSceneView?.let { sceneView ->
            sceneView.arFrame?.let { frame ->
                if (frame.camera.trackingState == TrackingState.TRACKING) {
                    val hitTest =
                        frame.hitTest(sceneView.width / 2f, sceneView.height / 2f)
                    val hitTestIterator = hitTest.iterator()
                    if (hitTestIterator.hasNext()) {
                        val hitResult = hitTestIterator.next()
                        val anchor = hitResult.createAnchor()
                        if (anchorNode == null) {
                            anchorNode = AnchorNode()
                            anchorNode?.setParent(sceneView.scene)
                            transformableNode =
                                DragTransformableNode(arFragment.transformationSystem)
                            transformableNode?.setParent(anchorNode)
                            boxNode = createBoxNode(.4f, .6f, .4f) // Creating cuboid
                            boxNode?.setParent(transformableNode)
                        }
                        anchorNode?.anchor?.detach()
                        anchorNode?.anchor = anchor
                    }
                }
            }
        }
    }
}
I think this is expected behavior, because the hit test hits the surface of the real object as well, but I don't know how to avoid it. Is there a way to ignore real objects and always place the 3D model on the floor?
UPDATE
I tried to follow @Mick's suggestions. I'm trying to group all the HitTestResults: when the hit test is done I get a list of HitResults for all visible planes, and I group them by their rounded Y value.
Example:
{1.35 -> [1.36776767, 1.35434343, 1.35999999, 1.37723278]}
{1.40 -> [1.4121212, 1.403232323, 1.44454545, 1.40001011]}
Then for the X and Z anchor coordinates I use the FIRST HitResult from the group with the minimum key (1.35 in the example).
For the Y anchor coordinate I take the elements of that minimum group and use their average value.
val hitResultList = hitTestIterator.asSequence().toList()
    .groupBy { round(it.hitPose.ty() * 20) / 20 }
    .minBy { it.key }?.value
val hitResult = hitResultList?.first()!!
val averageValueOfY = hitResultList?.map { it.hitPose.ty() }?.average()
createModel(hitResult, averageValueOfY)
Method to create the model:
private fun createModel(newHitResult: HitResult, averageValueOfY: Double) {
    try {
        val newAnchorPose = newHitResult.createAnchor().pose
        anchorNode?.anchor?.detach()
        anchorNode?.anchor = arFragment.arSceneView.session?.createAnchor(
            Pose(
                floatArrayOf(newAnchorPose.tx(), averageValueOfY.toFloat(), newAnchorPose.tz()),
                floatArrayOf(0f, 0f, 0f, 0f)
            )
        )
        isArBagModelRendered = true
        transformableNode?.select()
    } catch (exception: Exception) {
        Timber.d(exception)
    }
}
This update helped me achieve the behaviour I was after, but I noticed that sometimes my Y anchor point is underground: it looks like the MIN plane is sometimes detected under the floor :( and I don't know how to fix this issue for now.
Actually, I think you may have two separate problems to face for your use case:
the object being placed on the top surface, as you have highlighted in the question
occlusion, i.e. not showing the part of the model that should be hidden behind the real object once the model is placed correctly.
A simple solution to the first problem, or maybe more accurately a workaround, so you can check the second: simply have the user place the object in front of the real object, i.e. the case in your example above, and then move it back until it is exactly where they want it.
If you leave the plane highlighting on, i.e. the grid lines which show where a plane is detected, it may be more intuitive for a user to 'hit' the floor also rather than the top of the object.
This would allow you to test quickly whether the occlusion issue is actually the more serious issue, before you go too much further.
A more complex solution would be to iterate through the planes and experiment with comparing the 'pose' at the centre of each plane, to see if you can find a reliable way to decide which is the floor - the relevant method is part of the Plane class:
public Pose getCenterPose ()
Returns the pose of the center of the detected plane. The pose's transformed +Y axis will point normal out of the plane, with the +X and +Z axes orienting the extents of the bounding rectangle.
There are also methods to get the size of the width or depth of the plane if you were sure the floor will always be the biggest plane:
public float getExtentX ()
Returns the length of this plane's bounding rectangle measured along the local X-axis of the coordinate space centered on the plane.
public float getExtentZ ()
Returns the length of this plane's bounding rectangle measured along the local Z-axis of the coordinate frame centered on the plane.
Unfortunately, I don't think there is any existing handy helper function like 'get lowest plane' or 'get largest plane', so you would have to roll your own, as sketched below.
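For illustration, here is a hypothetical helper; the name and the "lowest tracked horizontal plane is the floor" heuristic are my assumptions, not an ARCore API:

import com.google.ar.core.Plane;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;

// Hypothetical 'get lowest plane' helper: scan all tracked upward-facing
// horizontal planes and keep the one whose center pose has the smallest Y.
static Plane findLowestHorizontalPlane(Session session) {
    Plane floor = null;
    for (Plane plane : session.getAllTrackables(Plane.class)) {
        if (plane.getTrackingState() != TrackingState.TRACKING
                || plane.getType() != Plane.Type.HORIZONTAL_UPWARD_FACING) {
            continue;
        }
        if (floor == null || plane.getCenterPose().ty() < floor.getCenterPose().ty()) {
            floor = plane;
        }
    }
    return floor; // may be null until a horizontal plane is tracked
}

If you would rather bet on the floor being the biggest plane, compare planes by plane.getExtentX() * plane.getExtentZ() instead.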
A note on the occlusion issue: there are frameworks and libraries that provide some forms of software-based occlusion, i.e. without requiring the device to have extra depth sensors, so it may be worth exploring these a little as well.
I wanted to find the convex hull in order to even out the edges of a hand-drawn triangle on paper. Smoothing using image processing was not enough, because I also needed to detect the triangle, and a hand-drawn triangle tends to have more than three points when the approxPolyDP function is used. The convex hull of a triangle, however, is correctly identified by approxPolyDP.
The problem is that I have other shapes in the image too, on which a convex hull is also created.
Before the convex hull is used (notice the contour labelled 3): (screenshot)
After the convex hull is used (the end points have been joined and the contour labelled 3 forms a triangle): (screenshot)
Now I want to somehow exclude contour 3 from being detected as a triangle.
To do this, my strategy was to remove this contour altogether from the ArrayList named hullMop. My triangle-detection function uses the contours from hullMop, so it would then never even check the contour labelled 3.
extcontours holds the contours from before the convex hull is used.
The function below checks whether each point from hullMop is inside extcontours. If it isn't, it must be removed from hullMop, because such points are the extra points generated by the convex hull, or in other words, the red line in the second image.
At this point I feel there is a hole in my concept: the OpenCV documentation says that convexHull returns a subset of the points of the original array, in other words, a subset of the points of extcontours.
My question is: how do I get the points of the red line created by the convexHull function? I don't want to use findContours because I feel there is a better way.
private void RemoveFalseHullTriangles(ArrayList<MatOfPoint> extcontours,
        ArrayList<MatOfPoint> hullMop, int width, int height) {
    // If any point of hullMop doesn't touch or isn't inside extcontours,
    // then that point must be on the red line.
    MatOfPoint2f Contours2f = new MatOfPoint2f();
    int hullCounter = 0;
    A: for (int i = 0; i < extcontours.size(); i++) {
        MatOfPoint ExtCnt = extcontours.get(i);
        MatOfPoint HullCnt = hullMop.get(hullCounter);
        ExtCnt.convertTo(Contours2f, CvType.CV_32F);
        B: for (int j = 0; j < HullCnt.rows(); j++) {
            double[] pt = new double[2];
            pt[0] = HullCnt.get(j, 0)[0];
            pt[1] = HullCnt.get(j, 0)[1];
            if (Math.abs(Imgproc.pointPolygonTest(Contours2f, new Point(pt), true)) > 40) {
                // Remove index from hullMop
                hullMop.remove(hullCounter);
                hullCounter--;
                break B;
            }
        }
        hullCounter++;
    }
}
Because hullMop only has a subset of the points of extcontours, I may never know the points of the red line of the contour labelled 3 after the convex hull is used.
Is there any way to get the coordinates of that red line generated by the convex hull, other than using findContours?
As referenced by Alexandar Reynolds, the problem really was detecting open contours first and excluding those contours before finding the convex hull.
The method to find open contours is explained here:
Recognize open and closed shapes opencv
Basically, if an outer contour has no child contour in the hierarchy, then it is an open contour and must be excluded before finding the convex hull (for my case), as sketched below.
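As a rough illustration of that rule (a sketch under my assumptions, not the exact code from the linked answer): with RETR_CCOMP, a closed drawn shape yields an outer contour plus a child (hole) contour, while an open stroke has no child.

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.imgproc.Imgproc;

// Keep only closed shapes: outer contours that have a child in the hierarchy.
static List<MatOfPoint> closedContoursOnly(Mat binaryImage) {
    List<MatOfPoint> contours = new ArrayList<>();
    Mat hierarchy = new Mat();
    Imgproc.findContours(binaryImage, contours, hierarchy,
            Imgproc.RETR_CCOMP, Imgproc.CHAIN_APPROX_SIMPLE);
    List<MatOfPoint> closed = new ArrayList<>();
    for (int i = 0; i < contours.size(); i++) {
        double[] h = hierarchy.get(0, i); // {next, previous, firstChild, parent}
        boolean isOuter = h[3] == -1;
        boolean hasChild = h[2] != -1;
        if (isOuter && hasChild) {
            closed.add(contours.get(i)); // safe to run convexHull on these
        }
    }
    return closed;
}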
I am developing an application in which I want to show a callout at the center of a geometry on my map. I am new to ArcGIS; I have tried a lot but am unable to get the callout in the center. Please, can anybody help me solve this problem?
My code:
SimpleFillSymbol sfs = new SimpleFillSymbol(Color.RED);
sfs.setAlpha(5);
graphic = new Graphic(feature.getGeometry(), sfs, feature.getAttributes());
Polygon polygon = (Polygon) graphic.getGeometry();
int polygonpointscount = polygon.getPointCount();
if (polygonpointscount != 0) {
    pointsize = polygonpointscount / 2;
}
Point midpoint = polygon.getPoint(pointsize);
Callout callout = mMapView.getCallout();
if (callout != null && callout.isShowing()) {
    callout.hide();
}
// Set the content, show the view
callout.setContent(adminsearchloadView(governorate_Name, area_Name, block_Number));
callout.setStyle(R.xml.calloutstyle);
callout.setMaxHeight(100000);
callout.setMaxWidth(100000);
callout.refresh();
callout.show(midpoint);
Short answer: use GeometryEngine.getLabelPointForPolygon(Polygon, SpatialReference).
Long answer:
From your code...
int polygonpointscount = polygon.getPointCount();
if (polygonpointscount != 0) {
    pointsize = polygonpointscount / 2;
}
Polygon.getPointCount() returns the number of vertices of the polygon, and getPoint(index) returns one of those vertices. For example, if the polygon is a rectangle, the vertices are its corners, so your Callout will be placed at one of the corners instead of at the centroid.
Instead, use GeometryEngine.getLabelPointForPolygon(Polygon, SpatialReference). It isn't guaranteed to return the centroid, but it returns an interior point that is good for labeling (and it looks like the centroid to me). Make sure you pass a SpatialReference object that tells getLabelPointForPolygon what the spatial reference of your polygon is.
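A minimal sketch of that call, reusing the variables from your code and assuming the polygon shares the map's spatial reference (package names are from the 10.x Runtime SDK; adjust if your feature uses a different spatial reference):

import com.esri.core.geometry.GeometryEngine;
import com.esri.core.geometry.Point;
import com.esri.core.geometry.Polygon;

// Compute an interior label point and show the callout there.
Polygon polygon = (Polygon) graphic.getGeometry();
Point labelPoint = GeometryEngine.getLabelPointForPolygon(
        polygon, mMapView.getSpatialReference());
callout.show(labelPoint);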
If you must have the centroid, you'll need to create a geoprocessing service based on ArcGIS's Feature to Point tool. However, getLabelPointForPolygon is much easier to implement, faster to execute, and probably satisfies your need.
If you use ArcGIS Server, you can try the Feature to Point tool (http://resources.arcgis.com/en/help/main/10.1/index.html#//00170000003m000000) to calculate a centroid layer.
I am working with AndEngine and I have two sprites: a plate and an apple. My plate sprite moves from point 1 to point 2, and my apple sprite jumps up and down.
Now I want to make the apple jump on the plate. I tried attaching the apple as a child of the plate, but the apple is not placed on the plate: it appears below the plate, and using z-index did not work.
The actual problem is moving the apple and the plate at the same time. I am stuck on why this is happening and what the solution would be. Any help would be appreciated. Here is my code:
plateDisplay = new Sprite(250, 300, this.plate, this.getVertexBufferObjectManager());
appleDisplay = new Sprite(250, 140, this.apple, this.getVertexBufferObjectManager());
plateDisplay.registerEntityModifier(
        new LoopEntityModifier(new PathModifier(20, path, EaseLinear.getInstance())));
appleDisplay.registerEntityModifier(
        new LoopEntityModifier(new ParallelEntityModifier(new MoveYModifier(1,
                appleDisplay.getY(), appleDisplay.getY() + 70, EaseBounceInOut.getInstance()))));
this.appleDisplay.setZIndex(1);
plateDisplay.setZIndex(0);
plateDisplay.attachChild(this.appleDisplay);
scene.attachChild(plateDisplay);
The issue you are having is that each object has its own coordinate system. The plate sprite has its own X and Y in scene coordinates, but when you add the apple to the plate you are now working in the plate's local coordinates. So if the apple was at the scene's 50,50, when you add it to the plate it will be at 50,50 as measured from the plate's transform center point.
There are LocalToScene and SceneToLocal coordinate utilities in AndEngine to help you make this conversion. Underneath they are not super complex - they just add up the transforms of all the nested sprites. Both utilities are part of the Sprite class, so you call them on the sprite in question. In your case, probably:
// Get the scene coordinates of the apple as an array.
float[] coordinates = new float[] { appleDisplay.getX(), appleDisplay.getY() };
// Convert the scene coordinates of the apple to the local coordinates of the plate.
float[] localCoordinates = plateDisplay.convertSceneToLocalCoordinates(coordinates);
// Attach the apple and set its position.
appleDisplay.setPosition(localCoordinates[0], localCoordinates[1]);
plateDisplay.attachChild(appleDisplay);
In Android, I have a Path object which I happen to know defines a closed path, and I need to figure out if a given point is contained within the path. What I was hoping for was something along the lines of
path.contains(int x, int y)
but that doesn't seem to exist.
The specific reason I'm looking for this is because I have a collection of shapes on screen defined as paths, and I want to figure out which one the user clicked on. If there is a better way to be approaching this such as using different UI elements rather than doing it "the hard way" myself, I'm open to suggestions.
I'm open to writing an algorithm myself if I have to, but that means different research I guess.
Here is what I did and it seems to work:
RectF rectF = new RectF();
path.computeBounds(rectF, true);
region = new Region();
region.setPath(path, new Region((int) rectF.left, (int) rectF.top, (int) rectF.right, (int) rectF.bottom));
Now you can use the region.contains(x,y) method.
Point point = new Point();
mapView.getProjection().toPixels(geoPoint, point);
if (region.contains(point.x, point.y)) {
// Within the path.
}
Update on 6/7/2010:
The region.setPath method will crash my app (with no warning message) if the rectF is too large. Here is my solution:
// Get the screen rect. If this intersects with the path's rect,
// then let's display this zone. The rectF becomes the intersection
// of the two rects, which decreases its size, and therefore no more crashes.
Rect drawableRect = new Rect();
mapView.getDrawingRect(drawableRect);
if (rectF.intersects(drawableRect.left, drawableRect.top, drawableRect.right, drawableRect.bottom)) {
// ... Display Zone.
}
The android.graphics.Path class doesn't have such a method. The Canvas class does have a clipping region that can be set to a path, but there is no way to test it against a point. You might try Canvas.quickReject, testing against a single-point rectangle (or a 1x1 Rect). I don't know if that would really check against the path or just the enclosing rectangle, though.
The Region class clearly only keeps track of the containing rectangle.
You might consider drawing each of your regions into an 8-bit alpha-layer Bitmap, with each Path filled in its own 'color' value (make sure anti-aliasing is turned off in your Paint). This creates a kind of mask for each path, filled with an index to the path that filled it. Then you can just use the pixel value as an index into your list of paths.
Bitmap lookup = Bitmap.createBitmap(width, height, Bitmap.Config.ALPHA_8);
// Do this so that regions outside any path have a default
// path index of 255.
lookup.eraseColor(0xFF000000);

Canvas canvas = new Canvas(lookup);
Paint paint = new Paint();
// These are defaults; you only need them if reusing a Paint.
paint.setAntiAlias(false);
paint.setStyle(Paint.Style.FILL);

for (int i = 0; i < paths.size(); i++) {
    paint.setColor(i << 24); // use only the alpha value for color 0xXX000000
    canvas.drawPath(paths.get(i), paint);
}
Then look up points,
int pathIndex = lookup.getPixel(x, y);
pathIndex >>>= 24;
Be sure to check for 255 (no path) if there are unfilled points.
WebKit's SkiaUtils has a C++ work-around for Randy Findley's bug:
bool SkPathContainsPoint(SkPath* originalPath, const FloatPoint& point, SkPath::FillType ft)
{
    SkRegion rgn;
    SkRegion clip;
    SkPath::FillType originalFillType = originalPath->getFillType();
    const SkPath* path = originalPath;
    SkPath scaledPath;
    int scale = 1;

    SkRect bounds = originalPath->getBounds();
    // We can immediately return false if the point is outside the bounding rect
    if (!bounds.contains(SkFloatToScalar(point.x()), SkFloatToScalar(point.y())))
        return false;

    originalPath->setFillType(ft);

    // Skia has trouble with coordinates close to the max signed 16-bit values.
    // If we have those, we need to scale.
    //
    // TODO: remove this code once Skia is patched to work properly with large
    // values.
    const SkScalar kMaxCoordinate = SkIntToScalar(1 << 15);
    SkScalar biggestCoord = std::max(std::max(std::max(bounds.fRight, bounds.fBottom), -bounds.fLeft), -bounds.fTop);

    if (biggestCoord > kMaxCoordinate) {
        scale = SkScalarCeil(SkScalarDiv(biggestCoord, kMaxCoordinate));

        SkMatrix m;
        m.setScale(SkScalarInvert(SkIntToScalar(scale)), SkScalarInvert(SkIntToScalar(scale)));
        originalPath->transform(m, &scaledPath);
        path = &scaledPath;
    }

    int x = static_cast<int>(floorf(point.x() / scale));
    int y = static_cast<int>(floorf(point.y() / scale));
    clip.setRect(x, y, x + 1, y + 1);

    bool contains = rgn.setPath(*path, clip);

    originalPath->setFillType(originalFillType);
    return contains;
}
I know I'm a bit late to the party, but I would solve this problem by thinking about it like determining whether or not a point is in a polygon.
http://en.wikipedia.org/wiki/Point_in_polygon
The math computes more slowly when you're looking at Bézier splines instead of line segments, but casting a ray from the point still works; a sketch of the line-segment case follows.
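For the line-segment case, the classic even-odd ray cast is only a few lines (an illustrative sketch; Bézier segments would first need to be flattened into line segments):

// Casts a horizontal ray from (px, py) and counts edge crossings;
// an odd count means the point is inside the polygon (xs[i], ys[i]).
static boolean pointInPolygon(float px, float py, float[] xs, float[] ys) {
    boolean inside = false;
    for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
        boolean edgeCrossesRayY = (ys[i] > py) != (ys[j] > py);
        if (edgeCrossesRayY) {
            // X coordinate where this edge crosses the ray's Y level
            float xCross = (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i];
            if (px < xCross) inside = !inside;
        }
    }
    return inside;
}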
For completeness, I want to make a couple notes here:
As of API 19, there is an intersection operation for Paths. You could create a very small square path around your test point, intersect it with the Path, and see if the result is empty or not (see the sketch at the end of this answer).
You can convert Paths to Regions and do a contains() operation. However Regions work in integer coordinates, and I think they use transformed (pixel) coordinates, so you'll have to work with that. I also suspect that the conversion process is computationally intensive.
The edge-crossing algorithm that Hans posted is good and quick, but you have to be very careful with certain corner cases, such as when the ray passes directly through a vertex, or intersects a horizontal edge, or when round-off error is a problem, which it always is.
The winding-number method is pretty much foolproof, but involves a lot of trig and is computationally expensive.
This paper by Dan Sunday gives a hybrid algorithm that's as accurate as the winding number but as computationally simple as the ray-casting algorithm. It blew me away how elegant it was.
See https://stackoverflow.com/a/33974251/338479 for my code which will do point-in-path calculation for a path consisting of line segments, arcs, and circles.
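To illustrate the first note above, here is a sketch of the API 19 intersection trick (the helper name and probe size are arbitrary choices of mine):

import android.graphics.Path;

// Intersect a tiny square around (x, y) with the path; a non-empty result
// means the point lies inside the filled path. Requires API 19 for Path.op.
static boolean pathContains(Path path, float x, float y) {
    Path probe = new Path();
    probe.addRect(x - 0.01f, y - 0.01f, x + 0.01f, y + 0.01f, Path.Direction.CW);
    if (!probe.op(path, Path.Op.INTERSECT)) {
        return false; // the op itself failed
    }
    return !probe.isEmpty();
}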