compute bounds of a path crash - android

I have closed Paths, each consisting of many Bezier segments with integer coordinates up to 5000,5000. I need to determine whether a point is inside one of these closed Paths. I use this code:
// path is a Path, bounds is a RectF
path.computeBounds(bounds, true);
Region region = new Region();
region.setPath(path, new Region((int) bounds.left, (int) bounds.top, (int) bounds.right, (int) bounds.bottom));
I do that once per Path and then do
region.contains(x, y);
The problem is that computeBounds crashes the app for my big Paths. There is no force close; the process just receives SIGSEGV and returns to the home screen with no message. I tried downscaling the coordinates (dividing by 1000), but it did not help; the program still crashes.
Is there any other way to determine whether a point is inside a complex Path that will not crash?
EDIT
Is there a way to compute this with RenderScript? I cannot find any RenderScript examples with paths/Bezier curves...
EDIT 2
This happens on a Nexus 7 with 4.1.1 and 4.1.2, and also on an ICS x86 tablet emulator.

Normally Java code results in an exception rather than a segmentation fault, which means something is going wrong in native code or the VM itself, unless you have your own JNI code in your project and that is causing the segmentation fault.
Instead of computing the path's bounds, which seems to be too expensive an operation for your complex path, you can use a clip rectangle that is large enough to bound all possible paths. That way you avoid the heavy and unnecessary call to Path.computeBounds.
import android.graphics.Path;
import android.graphics.Region;
import android.util.Log;
private static final String id = "Graphics";
...
Path path = new Path();
/* Initialize path here... */
/* Huge rectangle to bound all possible paths */
Region clip = new Region(0, 0, 10000, 10000);
/* Define the region */
Region region = new Region();
if (region.setPath(path, clip)) {
    Log.d(id, "This region is fine");
} else {
    Log.e(id, "This region is empty");
}
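With the region built this way, the point test from your question works unchanged; a minimal usage sketch (x and y are the integer point coordinates from your own code):
/* Point-in-path test, as in the original question */
if (region.contains(x, y)) {
    /* (x, y) lies inside the closed path */
}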

Related

Why does Canvas::drawArc method draw incorrect angles?

I'm trying to draw a 180° arc within an Android view.
In the onDraw method I'm using Canvas::drawArc like this:
canvas.drawArc(arcRect, 180.0f, 180.0f, false, arcBackgroundPaint);
The Paint that is being used has a strokeCap of type BUTT. As you can see in the image below, the ends of the arc do not look quite right. The ends are angled up slightly from the inner diameter to the outer diameter.
Does anyone know why this is happening? Is there a way to fix it other than changing the values passed to drawArc so that it actually draws more than 180 degrees? That seems like a nasty hack that I'd rather avoid.
UPDATE:
As requested, I added code to draw a straight line at the bottom of the arc. I draw the line before I draw the arc so the line is behind. I also made the arc black to get better contrast with the line. Here is the outcome:
Yuck.
Here's one with anti-aliasing turned off for the line and arc...
The drawArc incorrect-angles problem occurs when hardware acceleration is on.
Reproduced on a real phone (Nexus 5, Android 6.0.1).
How to test:
Add android:hardwareAccelerated="false" to the application tag in AndroidManifest.xml.
NB:
The documentation does not state the first supported API level for drawArc() directly, but its canvas-scaling support under hardware acceleration starts at API 17.
Hardware Acceleration > Canvas Scaling
https://developer.android.com/guide/topics/graphics/hardware-accel.html
First supported API level
Simple Shapes: 17
Note: 'Simple' shapes are drawRect(), drawCircle(), drawOval(), drawRoundRect(), and drawArc() (with useCenter=false) ...
Updated:
As PICyourBrain wrote in another answer, drawArc() itself is part of the reason this problem occurs; with drawPath(), the problem does not occur.
For information:
It is not the same shape, but drawArc() with useCenter=true also seems safe to use, since hardware acceleration is not used for it. (The ends of the arc and its center are connected with lines, and those lines look straight.)
Canvas Scaling (same link above)
First supported API level
Complex Shapes: x
Other Q&As related to drawArc()
Canvas.drawArc() artefacts
What's this weird drawArc() / arcTo() bug (graphics glitch) in Android?
[Old]
I'll leave these for your information.
Here are some test results I've tried.
Updated 1-1:
With the Nexus 5 API 10 emulator, with arcRect = new RectF(100, 100, 1500, 1400);.
With the Nexus 5 API 16 emulator, this does not happen.
(The difference is the presence of host-side hardware acceleration.)
I thought this might be an anti-aliasing related problem, but it happens regardless of whether arcBackgroundPaint.setAntiAlias(true) or setDither(true) is set.
NB: This was caused by a typo, sorry. The aspect ratio of arcRect should be 1:1 for this test.
With arcRect = new RectF(100, 100, 1400, 1400); and arcPenWidth = 200f;
With arcRect = new RectF(100, 100, 1500, 1300);
1-2: for comparison
With emulator NexusOne (480x800, HDPI) API10, with arcRect = new RectF(100, 100, 500, 500);
Updated 2: Line extended.
I first thought drawing outside the view might cause this, but this is also an emulator bug (not drawArc()-specific behavior).
With the API 10 emulator in landscape (horizontal) orientation, this occurs.
The calculation of the line position seems broken.
Please see the right end of the straight line.
final float lineStartX = arcRect.left - 250f;
final float lineEndX = arcRect.right + 250f;
emulator Nexus5 API10
horizontal (API10)
vertical (API10)
2-2: Just for information
A sample of the odd behavior when the view position (or drawing position) is out of range.
Updated 3: Emulator bug
Please see the bottom of the image.
The blue line is part of the desktop background image showing through.
emulator Nexus5 API10
Update 4: The result seems to depend on style.
With title bar
Without title bar
Update 5: The result seems to depend on line width.
With arcPenWidth = 430f (API10, horizontal)
A slight notch is visible on the right side.
With 440f
With 450f
Here's my (first) test code.
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.RectF;
import android.util.AttributeSet;
import android.view.View;
final class TestView extends View
{
    RectF arcRect;
    Paint arcBackgroundPaint;
    Paint linePaint;

    public TestView(final Context context, final AttributeSet attrs)
    {
        super(context, attrs);
        //arcRect = new RectF(100, 100, 500, 500);
        arcRect = new RectF(100, 100, 1500, 1500); // fixed (old: 1500, 1400)

        arcBackgroundPaint = new Paint();
        arcBackgroundPaint.setColor(0xFFFFFFFF);
        arcBackgroundPaint.setStyle(Paint.Style.STROKE);
        arcBackgroundPaint.setStrokeCap(Paint.Cap.BUTT);
        arcBackgroundPaint.setStrokeWidth(200f);

        linePaint = new Paint();
        linePaint.setColor(0xFF00FF00);
        linePaint.setStyle(Paint.Style.STROKE);
        linePaint.setStrokeCap(Paint.Cap.BUTT);
        linePaint.setStrokeWidth(2f);
    }

    @Override
    protected void onDraw(final Canvas canvas)
    {
        super.onDraw(canvas);
        canvas.drawArc(arcRect, 180.0f, 180.0f, false, arcBackgroundPaint);

        final float lineStartX = arcRect.left - 50f;
        final float lineEndX = arcRect.right + 50f;
        final float lineY = arcRect.centerY();
        canvas.drawLine(lineStartX, lineY, lineEndX, lineY, linePaint);
    }
}
For anyone else who comes across this, I did find a workaround. Instead of using Canvas::drawArc, create a Path, add an arc to it, and then draw the path on the canvas. Like this:
Path arcPath = new Path();
RectF arcRect = ... // Some rect that will contain the arc
arcPath.addArc(arcRect, 180, 180);
canvas.drawPath(arcPath, arcBackgroundPaint);
Then the rendering problem highlighted no longer occurs.
Since my question was not "how do I fix the problem" but rather "why is this a problem" I am not marking this as the correct answer.

Rendering path with Canvas.drawPath() in ICS with hardware acceleration

On an ICS device, I tried the following code to draw two rectangles.
Path p1 = new Path();
p1.moveTo(0, 0);
p1.lineTo(0, 100);
p1.lineTo(100, 100);
p1.lineTo(100, 0);
p1.close();
Path p2 = new Path();
Matrix scaling = new Matrix();
scaling.preScale(2, 2);
p1.transform(scaling, p2);
Paint paint = new Paint(); // paint setup omitted for brevity
canvas.drawPath(p1, paint);
canvas.drawPath(p2, paint);
Running the above code on an ICS device with hardware acceleration enabled (as it is by default), p1 is drawn whereas p2 is not.
In general, what happens for me is that as long as a Path is not hand-built (i.e. by calling lineTo(), quadTo(), etc.) but is obtained by copying or transforming (i.e. via the copy constructor, transform(matrix, dest), offset(x, y, dest), etc.), it is not drawn.
I found a "widely known" issue that is similar but not exactly the same as my problem: https://groups.google.com/forum/#!msg/android-developers/eTxV4KPy1G4/tAe2zUPCjMcJ
Therefore, can anyone tell me what issue I am running into? In my case, I have to resort to path transformation; otherwise code complexity would greatly increase. Thanks!
Try setting android:layerType="software" in XML on the view to see if that fixes it. Some methods aren't supported with hardware acceleration on all API levels.
The list is here:
http://developer.android.com/guide/topics/graphics/hardware-accel.html
Note that if changing the layer type fixes it, you should create a separate layout for the newer APIs for optimal performance.
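The same thing can be done programmatically; a minimal sketch, assuming a custom view instance named myPathView (the name is illustrative):
// Force software rendering for this view only (API 11+).
// This is the programmatic equivalent of android:layerType="software".
myPathView.setLayerType(View.LAYER_TYPE_SOFTWARE, null);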

How to improve OpenCV face detection performance in android?

I am working on an Android project in which I am using OpenCV to detect faces in all of the images in the gallery. The face detection runs in a service, which keeps working until all the images are processed. It stores the detected faces in internal storage and also shows them in a grid view if the activity is open.
My code is:
CascadeClassifier mJavaDetector = null;
public void getFaces()
{
    for (int i = 0; i < size; i++)
    {
        File file = new File(urls.get(i));
        imagepath = urls.get(i);
        defaultBitmap = BitmapFactory.decodeFile(file.getAbsolutePath(), bitmapFactoryOptions);
        mJavaDetector = new CascadeClassifier(FaceDetector.class.getResource("lbpcascade_frontalface").getPath());
        Mat image = new Mat(defaultBitmap.getHeight(), defaultBitmap.getWidth(), CvType.CV_8UC1);
        Utils.bitmapToMat(defaultBitmap, image);
        MatOfRect faceDetections = new MatOfRect();
        try
        {
            mJavaDetector.detectMultiScale(image, faceDetections, 1.1, 10, 0, new Size(20, 20), new Size(image.width(), image.height()));
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
        if (faceDetections.toArray().length > 0)
        {
        }
    }
}
Everything works, but it detects faces very slowly; the performance is very poor. When I debugged the code, I found that the line taking all the time is:
mJavaDetector.detectMultiScale(image,faceDetections,1.1, 10, 0, new Size(20,20), new Size(image.width(), image.height()));
I have checked multiple posts about this problem, but I didn't find any solution.
Please tell me what I should do to solve this problem.
Any help would be greatly appreciated. Thank you.
You should pay attention to the parameters of detectMultiScale():
scaleFactor – Parameter specifying how much the image size is reduced at each image scale. This parameter is used to create a scale pyramid. It is necessary because the model has a fixed size during training; without the pyramid, the only detectable size would be that fixed one (which can also be read from the XML). However, face detection can be made scale-invariant by using a multi-scale representation, i.e. detecting large and small faces with the same detection window.
scaleFactor depends on the size of your trained detector, but in fact, you need to set it as high as possible while still getting "good" results, so this should be determined empirically.
Your value of 1.1 can be a good choice for this purpose. It means a relatively small step is used for resizing (the size is reduced by 10% per level), which increases the chance of finding a size that matches the model. If your trained detector has size 10x10, you can detect faces of size 11x11, 12x12 and so on. But a factor of 1.1 requires roughly double the number of pyramid layers (and about 2x the computation time) compared to 1.2.
minNeighbors – Parameter specifying how many neighbours each candidate rectangle should have to retain it.
The cascade classifier works with a sliding-window approach: you slide a window over the image, then resize the image and search again, until the image cannot be resized further. In every iteration the positive outputs of the cascade classifier are stored, but unfortunately it also detects many false positives. To eliminate the false positives and get the proper face rectangle out of the detections, a neighbourhood approach is applied. 3-6 is a good value for it; if the value is too high, you can lose true positives too.
minSize – In terms of the sliding-window approach described for minNeighbors, this is the smallest window the cascade will detect. Objects smaller than this are ignored. Usually cv::Size(20, 20) is enough for face detection.
maxSize – Maximum possible object size. Objects bigger than that are ignored.
Finally you can try different classifiers based on different features (such as Haar, LBP, HoG). Usually, LBP classifiers are a few times faster than Haar's, but also less accurate.
And it is also strongly recommended to look over these questions:
Recommended values for OpenCV detectMultiScale() parameters
OpenCV detectMultiScale() minNeighbors parameter
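Putting the parameter advice above together, a minimal sketch of a tuned call is shown below; the concrete values (scale factor 1.2, 3 neighbours, a 60x60 minimum size) are illustrative starting points to be adjusted empirically, and the variables are the ones from the question:
// A larger scaleFactor means fewer pyramid levels to scan,
// and a larger minSize skips windows too small to be a face.
mJavaDetector.detectMultiScale(image, faceDetections,
        1.2,               // scaleFactor: coarser pyramid than 1.1
        3,                 // minNeighbors: 3-6 is a typical range
        0,                 // flags (ignored by newer cascades)
        new Size(60, 60),  // minSize: ignore faces smaller than 60x60 px
        new Size(image.width(), image.height())); // maxSize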
Instead of reading images as a Bitmap and then converting them to a Mat using Utils.bitmapToMat(defaultBitmap, image), you can directly use Mat image = Highgui.imread(imagepath);. You can check here for the imread() function.
Also, the line below takes too much time because the detector is looking for faces of at least Size(20, 20), which is pretty small. Check this video for a visualization of face detection using OpenCV.
mJavaDetector.detectMultiScale(image,faceDetections,1.1, 10, 0, new Size(20,20), new Size(image.width(), image.height()));
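A minimal sketch of the imread suggestion, assuming the OpenCV 2.4.x Java bindings used in the question (in OpenCV 3.x the equivalent call lives in Imgcodecs):
// Load the file straight into a single-channel grayscale Mat,
// skipping the Bitmap decode and the bitmapToMat conversion.
Mat image = Highgui.imread(imagepath, Highgui.CV_LOAD_IMAGE_GRAYSCALE);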

Faster Paint on Flex Mobile

I am trying to build an app that tracks touchpoints and draws circles at those points using Flash Builder. The following works perfectly, but after a while, it begins to lag and the touch will be well ahead of the drawn circles. Is there a way of drawing the circles that does not produce lag as more and more of them are added?
In declarations, I have:
<fx:Component className="Circle">
<s:Ellipse>
<s:stroke>
<s:SolidColorStroke alpha="0"/>
</s:stroke>
</s:Ellipse>
</fx:Component>
And this is the drawing function:
var c:Circle = new Circle();
c.x = somex;
c.y = somey;
c.fill = new SolidColor(somecolorint);
c.height = somesize;
c.width = somesize;
c.alpha = 1;
addElement(c);
c = null;
Try using a fullscreen Bitmap backed by a BitmapData object. As the touch points move, update the bitmap data at the coordinates where the touch occurred. Modifying and blitting a screen-sized bitmap is extremely fast and will probably work great for what you're trying to do.
Another common performance trade-off is to draw a series of line segments instead of a continuous stream of circles. You create a new line segment only when a certain distance has been travelled; this limits the number of nodes in the segment, keeping performance high.

How can I tell if a closed path contains a given point?

In Android, I have a Path object which I happen to know defines a closed path, and I need to figure out if a given point is contained within the path. What I was hoping for was something along the lines of
path.contains(int x, int y)
but that doesn't seem to exist.
The specific reason I'm looking for this is because I have a collection of shapes on screen defined as paths, and I want to figure out which one the user clicked on. If there is a better way to be approaching this such as using different UI elements rather than doing it "the hard way" myself, I'm open to suggestions.
I'm open to writing an algorithm myself if I have to, but that means different research I guess.
Here is what I did and it seems to work:
RectF rectF = new RectF();
path.computeBounds(rectF, true);
region = new Region();
region.setPath(path, new Region((int) rectF.left, (int) rectF.top, (int) rectF.right, (int) rectF.bottom));
Now you can use the region.contains(x,y) method.
Point point = new Point();
mapView.getProjection().toPixels(geoPoint, point);
if (region.contains(point.x, point.y)) {
// Within the path.
}
** Update on 6/7/2010 **
The region.setPath method will cause my app to crash (no warning message) if the rectF is too large. Here is my solution:
// Get the screen rect. If it intersects with the path's rect,
// display this zone. rectF is shrunk to the intersection of the two
// rects, which keeps it small enough to avoid the crash.
Rect drawableRect = new Rect();
mapView.getDrawingRect(drawableRect);
if (rectF.intersect(drawableRect.left, drawableRect.top, drawableRect.right, drawableRect.bottom)) {
    // ... Display Zone.
}
The android.graphics.Path class doesn't have such a method. The Canvas class does have a clipping region that can be set to a path, but there is no way to test it against a point. You might try Canvas.quickReject, testing against a single-point rectangle (or a 1x1 Rect). I don't know whether that would really check against the path or just the enclosing rectangle, though.
The Region class clearly only keeps track of the containing rectangle.
You might consider drawing each of your regions into an 8-bit alpha-layer Bitmap, with each Path filled in its own 'color' value (make sure anti-aliasing is turned off in your Paint). This creates a kind of mask for each path, filled with an index to the path that filled it. Then you can just use the pixel value as an index into your list of paths.
Bitmap lookup = Bitmap.createBitmap(width, height, Bitmap.Config.ALPHA_8);
//do this so that regions outside any path have a default
//path index of 255
lookup.eraseColor(0xFF000000);
Canvas canvas = new Canvas(lookup);
Paint paint = new Paint();
//these are defaults, you only need them if reusing a Paint
paint.setAntiAlias(false);
paint.setStyle(Paint.Style.FILL);
for (int i = 0; i < paths.size(); i++)
{
    paint.setColor(i << 24); // use only alpha value for color 0xXX000000
    canvas.drawPath(paths.get(i), paint);
}
Then look up points,
int pathIndex = lookup.getPixel(x, y);
pathIndex >>>= 24;
Be sure to check for 255 (no path) if there are unfilled points.
WebKit's SkiaUtils has a C++ work-around for Randy Findley's bug:
bool SkPathContainsPoint(SkPath* originalPath, const FloatPoint& point, SkPath::FillType ft)
{
    SkRegion rgn;
    SkRegion clip;
    SkPath::FillType originalFillType = originalPath->getFillType();
    const SkPath* path = originalPath;
    SkPath scaledPath;
    int scale = 1;

    SkRect bounds = originalPath->getBounds();

    // We can immediately return false if the point is outside the bounding rect
    if (!bounds.contains(SkFloatToScalar(point.x()), SkFloatToScalar(point.y())))
        return false;

    originalPath->setFillType(ft);

    // Skia has trouble with coordinates close to the max signed 16-bit values
    // If we have those, we need to scale.
    //
    // TODO: remove this code once Skia is patched to work properly with large
    // values
    const SkScalar kMaxCoordinate = SkIntToScalar(1<<15);
    SkScalar biggestCoord = std::max(std::max(std::max(bounds.fRight, bounds.fBottom), -bounds.fLeft), -bounds.fTop);

    if (biggestCoord > kMaxCoordinate) {
        scale = SkScalarCeil(SkScalarDiv(biggestCoord, kMaxCoordinate));

        SkMatrix m;
        m.setScale(SkScalarInvert(SkIntToScalar(scale)), SkScalarInvert(SkIntToScalar(scale)));
        originalPath->transform(m, &scaledPath);
        path = &scaledPath;
    }

    int x = static_cast<int>(floorf(point.x() / scale));
    int y = static_cast<int>(floorf(point.y() / scale));
    clip.setRect(x, y, x + 1, y + 1);

    bool contains = rgn.setPath(*path, clip);

    originalPath->setFillType(originalFillType);
    return contains;
}
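The same scaling trick can be applied from Java before calling Region.setPath; a rough sketch, assuming the android.graphics classes are imported (the 32767 limit mirrors the 16-bit coordinate issue described in the comment above, and the helper name is illustrative):
// Scale the path (and the test point) down if its bounds exceed the
// coordinate range Skia is comfortable with, then test via a Region.
static boolean containsPoint(Path path, float x, float y) {
    RectF bounds = new RectF();
    path.computeBounds(bounds, true);
    float biggest = Math.max(Math.max(bounds.right, bounds.bottom),
                             Math.max(-bounds.left, -bounds.top));
    int scale = 1;
    Path testPath = path;
    if (biggest > 32767f) {
        scale = (int) Math.ceil(biggest / 32767f);
        Matrix m = new Matrix();
        m.setScale(1f / scale, 1f / scale);
        testPath = new Path();
        path.transform(m, testPath);
        testPath.computeBounds(bounds, true);
    }
    Region clip = new Region((int) Math.floor(bounds.left), (int) Math.floor(bounds.top),
                             (int) Math.ceil(bounds.right), (int) Math.ceil(bounds.bottom));
    Region region = new Region();
    region.setPath(testPath, clip);
    return region.contains((int) (x / scale), (int) (y / scale));
}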
I know I'm a bit late to the party, but I would solve this problem by thinking of it as determining whether a point is in a polygon.
http://en.wikipedia.org/wiki/Point_in_polygon
The math computes more slowly when you're looking at Bezier splines instead of line segments, but drawing a ray from the point still works.
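A minimal sketch of the even-odd ray-casting test in Java, assuming the path has already been flattened into a polygon given as parallel arrays of vertex coordinates (the method name and the flattening step are illustrative, not part of any Android API):
// Cast a horizontal ray from (px, py) towards +x and count edge crossings.
// An odd number of crossings means the point is inside the polygon.
static boolean pointInPolygon(float[] xs, float[] ys, float px, float py) {
    boolean inside = false;
    for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
        // Does edge (j -> i) straddle the horizontal line y = py?
        if ((ys[i] > py) != (ys[j] > py)) {
            // x coordinate where the edge crosses y = py
            float crossX = (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i];
            if (px < crossX) {
                inside = !inside;
            }
        }
    }
    return inside;
}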
For completeness, I want to make a couple notes here:
As of API 19, there is an intersection operation for Paths. You could create a very small square path around your test point, intersect it with the Path, and see whether the result is empty or not (a minimal sketch follows this list of notes).
You can convert Paths to Regions and do a contains() operation. However Regions work in integer coordinates, and I think they use transformed (pixel) coordinates, so you'll have to work with that. I also suspect that the conversion process is computationally intensive.
The edge-crossing algorithm that Hans posted is good and quick, but you have to be very careful for certain corner cases such as when the ray passes directly through a vertex, or intersects a horizontal edge, or when round-off error is a problem, which it always is.
The winding number method is pretty much foolproof, but involves a lot of trig and is computationally expensive.
This paper by Dan Sunday gives a hybrid algorithm that's as accurate as the winding number but as computationally simple as the ray-casting algorithm. It blew me away how elegant it was.
See https://stackoverflow.com/a/33974251/338479 for my code which will do point-in-path calculation for a path consisting of line segments, arcs, and circles.
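A minimal sketch of the API 19 Path.op() approach from the first note above; the probe size is an illustrative choice:
// Intersect a tiny square around the test point with the path (API 19+).
// An empty intersection means the point is outside.
static boolean pathContains(Path path, float x, float y) {
    final float eps = 0.5f; // half-size of the probe square (illustrative)
    Path probe = new Path();
    probe.addRect(x - eps, y - eps, x + eps, y + eps, Path.Direction.CW);
    probe.op(path, Path.Op.INTERSECT); // probe becomes probe INTERSECT path
    return !probe.isEmpty();
}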
