Matplotlib & Kivy: How to update inset axes xlim and ylim efficiently? - android

I am trying to make a data plotter using matplotlib on a Kivy-based GUI. I have 48 polynomials in the same graph as Line2D objects. Some of them can overlap at many points, so I need an inset axes to provide a zoom option. I update a position array in an on_touch_move callback and redraw the graph in a callback scheduled with Clock.schedule_interval. The problem is that updating the inset plot is very slow: it takes 70 ms on Windows, which is negligible, but on Android it is very laggy. A reference snippet is below.
# Imports this snippet relies on (the FigureCanvasKivyAgg path depends on
# how the matplotlib garden package is installed):
import matplotlib.pyplot as plotter
from mpl_toolkits.axes_grid1.inset_locator import inset_axes, mark_inset
from kivy.clock import Clock
from kivy.garden.matplotlib.backend_kivyagg import FigureCanvasKivyAgg

# Methods of the plotting widget:
def plot_graph(self):
    self.fig, self.ax = plotter.subplots(1, 1)
    self.axin = inset_axes(self.ax, width=4, height=2)
    self.fig.tight_layout()
    for data in data_set:
        line, = self.ax.plot(data)
        self.axin.plot(data)
    self.add_widget(FigureCanvasKivyAgg(self.fig))
    Clock.schedule_interval(self.update_inset, .1)  # .2 does not work either
    mark_inset(self.ax, self.axin, loc1=1, loc2=3)

def on_touch_move(self, touch):
    if self.collide_point(*touch.pos):
        point = self.ax.transData.inverted().transform(touch.pos)
        self.inset_points = point[0] - 2.5, point[0] + 2.5, point[1] - .25, point[1] + .25

def update_inset(self, *args):
    if len(self.inset_points) == 4:  # avoid crashes before the graph is drawn
        self.axin.set_xlim(self.inset_points[0], self.inset_points[1])
        self.axin.set_ylim(self.inset_points[2], self.inset_points[3])
        self.axin.figure.canvas.draw()
I have tried FuncAnimation, but it did not help, since it is not compatible with Kivy. Scheduling seems the best way to update it, but it must be faster, at least for Android.
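One way to avoid the full canvas.draw() on every tick is matplotlib's blitting support: draw everything once, cache the background outside the inset, then on each update redraw only the inset axes and blit that region back. Below is a minimal standalone sketch, assuming an Agg-based canvas; whether FigureCanvasKivyAgg propagates blit() to the widget texture (rather than only full draws) is an assumption to verify on-device.

import matplotlib
matplotlib.use("Agg")  # stand-in for the Agg-based Kivy backend
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.axes_grid1.inset_locator import inset_axes

fig, ax = plt.subplots()
axin = inset_axes(ax, width=4, height=2)
for _ in range(48):                        # 48 lines, as in the question
    data = np.cumsum(np.random.randn(200))
    ax.plot(data)
    axin.plot(data)

axin.set_animated(True)                    # keep the inset out of the cached background
fig.canvas.draw()                          # one full draw
background = fig.canvas.copy_from_bbox(fig.bbox)

def update_inset(x0, x1, y0, y1):
    axin.set_xlim(x0, x1)
    axin.set_ylim(y0, y1)
    fig.canvas.restore_region(background)  # repaint everything but the inset
    fig.draw_artist(axin)                  # redraw only the inset axes
    fig.canvas.blit(axin.bbox)             # push just that region to the buffer

update_inset(50, 60, -2, 2)

Applied to the question's code, the three calls restore_region / draw_artist / blit would replace self.axin.figure.canvas.draw() in update_inset, with the background cached once at the end of plot_graph.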

Related

Recognition of handwritten circles, diamonds and rectangles

I'm looking for some advice about recognition of three handwritten shapes - circles, diamonds and rectangles. I tried different approaches but they failed, so maybe you could point me in another, better direction.
What I tried:
1) A simple algorithm based on the dot product between points of the handwritten shape and an ideal shape. It works reasonably well for recognizing rectangles, but fails on circles and diamonds. The problem is that the dot products of a circle and a diamond are quite similar, even for ideal shapes.
2) The same approach but using Dynamic Time Warping as the measure of similarity. Similar problems.
3) Neural networks. I tried a few approaches - giving point data to the networks (feedforward and Kohonen) or giving a rasterized image. The Kohonen network always classified all the data (even the samples used to train it) into the same category. Feedforward with points was better (but on the same level as approaches 1 and 2), and with rasterized images it was very slow (I need at least size^2 input neurons, and for small raster sizes the circle is indistinguishable even for me ;) ) and also without success. I think this is because all of these shapes are closed figures? I am not a big specialist in ANNs (I had a one-semester course on them), so maybe I am using them wrong?
4) Saving the shape as a Freeman chain code and using some algorithms for computing similarity. I thought that in FCC the shapes would be really different from each other. No success here (but I haven't explored this path very deeply).
I am building an Android app with this, but I think the language is irrelevant here.
Here's some working code for a shape classifier: http://jsfiddle.net/R3ns3/ I pulled the threshold numbers (the *Threshold variables in the code) out of the ether, so of course they can be tweaked for better results.
I use the bounding box, average point in a sub-section, angle between points, polar angle from bounding box center, and corner recognition. It can classify hand drawn rectangles, diamonds, and circles. The code records points while the mouse button is down and tries to classify when you stop drawing.
HTML
<canvas id="draw" width="300" height="300" style="position:absolute; top:0px; left:0p; margin:0; padding:0; width:300px; height:300px; border:2px solid blue;"></canvas>
JS
var state = {
    width: 300,
    height: 300,
    pointRadius: 2,
    cornerThreshold: 125,
    circleThreshold: 145,
    rectangleThreshold: 45,
    diamondThreshold: 135,
    canvas: document.getElementById("draw"),
    ctx: document.getElementById("draw").getContext("2d"),
    drawing: false,
    points: [],
    getCorners: function(angles, pts) {
        var list = pts || this.points;
        var corners = [];
        for(var i=0; i<angles.length; i++) {
            if(angles[i] <= this.cornerThreshold) {
                corners.push(list[(i + 1) % list.length]);
            }
        }
        return corners;
    },
    draw: function(color, pts) {
        var list = pts || this.points;
        this.ctx.fillStyle = color;
        for(var i=0; i<list.length; i++) {
            this.ctx.beginPath();
            this.ctx.arc(list[i].x, list[i].y, this.pointRadius, 0, Math.PI * 2, false);
            this.ctx.fill();
        }
    },
    classify: function() {
        // get bounding box
        var left = this.width, right = 0,
            top = this.height, bottom = 0;
        for(var i=0; i<this.points.length; i++) {
            var pt = this.points[i];
            if(left > pt.x) left = pt.x;
            if(right < pt.x) right = pt.x;
            if(top > pt.y) top = pt.y;
            if(bottom < pt.y) bottom = pt.y;
        }
        var center = {x: (left+right)/2, y: (top+bottom)/2};
        this.draw("#00f", [
            {x: left, y: top},
            {x: right, y: top},
            {x: left, y: bottom},
            {x: right, y: bottom},
        ]);
        // find average point in each sector (9 sectors)
        var sects = [
            {x:0,y:0,c:0},{x:0,y:0,c:0},{x:0,y:0,c:0},
            {x:0,y:0,c:0},{x:0,y:0,c:0},{x:0,y:0,c:0},
            {x:0,y:0,c:0},{x:0,y:0,c:0},{x:0,y:0,c:0}
        ];
        var x3 = (right + (1/(right-left)) - left) / 3;
        var y3 = (bottom + (1/(bottom-top)) - top) / 3;
        for(var i=0; i<this.points.length; i++) {
            var pt = this.points[i];
            var sx = Math.floor((pt.x - left) / x3);
            var sy = Math.floor((pt.y - top) / y3);
            var idx = sy * 3 + sx;
            sects[idx].x += pt.x;
            sects[idx].y += pt.y;
            sects[idx].c++;
            if(sx == 1 && sy == 1) {
                return "UNKNOWN";
            }
        }
        // get the significant points (clockwise)
        var sigPts = [];
        var clk = [0, 1, 2, 5, 8, 7, 6, 3];
        for(var i=0; i<clk.length; i++) {
            var pt = sects[clk[i]];
            if(pt.c > 0) {
                sigPts.push({x: pt.x / pt.c, y: pt.y / pt.c});
            } else {
                return "UNKNOWN";
            }
        }
        this.draw("#0f0", sigPts);
        // find angle between consecutive 3 points
        var angles = [];
        for(var i=0; i<sigPts.length; i++) {
            var a = sigPts[i],
                b = sigPts[(i + 1) % sigPts.length],
                c = sigPts[(i + 2) % sigPts.length],
                ab = Math.sqrt(Math.pow(b.x-a.x,2)+Math.pow(b.y-a.y,2)),
                bc = Math.sqrt(Math.pow(b.x-c.x,2)+Math.pow(b.y-c.y,2)),
                ac = Math.sqrt(Math.pow(c.x-a.x,2)+Math.pow(c.y-a.y,2)),
                deg = Math.floor(Math.acos((bc*bc+ab*ab-ac*ac)/(2*bc*ab)) * 180 / Math.PI);
            angles.push(deg);
        }
        console.log(angles);
        var corners = this.getCorners(angles, sigPts);
        // get polar angle of corners
        for(var i=0; i<corners.length; i++) {
            corners[i].t = Math.floor(Math.atan2(corners[i].y - center.y, corners[i].x - center.x) * 180 / Math.PI);
        }
        console.log(corners);
        // whats the shape ?
        if(corners.length <= 1) { // circle
            return "CIRCLE";
        } else if(corners.length == 2) { // circle || diamond
            // difference of polar angles
            var diff = Math.abs((corners[0].t - corners[1].t + 180) % 360 - 180);
            console.log(diff);
            if(diff <= this.circleThreshold) {
                return "CIRCLE";
            } else {
                return "DIAMOND";
            }
        } else if(corners.length == 4) { // rectangle || diamond
            // sum of polar angles of corners
            var sum = Math.abs(corners[0].t + corners[1].t + corners[2].t + corners[3].t);
            console.log(sum);
            if(sum <= this.rectangleThreshold) {
                return "RECTANGLE";
            } else if(sum >= this.diamondThreshold) {
                return "DIAMOND";
            } else {
                return "UNKNOWN";
            }
        } else {
            alert("draw neater please");
            return "UNKNOWN";
        }
    }
};
state.canvas.addEventListener("mousedown", (function(e) {
    if(!this.drawing) {
        this.ctx.clearRect(0, 0, 300, 300);
        this.points = [];
        this.drawing = true;
        console.log("drawing start");
    }
}).bind(state), false);
state.canvas.addEventListener("mouseup", (function(e) {
    this.drawing = false;
    console.log("drawing stop");
    this.draw("#f00");
    alert(this.classify());
}).bind(state), false);
state.canvas.addEventListener("mousemove", (function(e) {
    if(this.drawing) {
        var x = e.pageX, y = e.pageY;
        this.points.push({"x": x, "y": y});
        this.ctx.fillStyle = "#000";
        this.ctx.fillRect(x-2, y-2, 4, 4);
    }
}).bind(state), false);
Given the possible variation in handwritten inputs I would suggest that a neural network approach is the way to go; you will find it difficult or impossible to accurately model these classes by hand. LastCoder's attempt works to a degree, but it does not cope with much variation or have promise for high accuracy if worked on further - this kind of hand-engineered approach was abandoned a very long time ago.
State-of-the-art results in handwritten character classification these days are typically achieved with convolutional neural networks (CNNs). Given that you have only 3 classes, the problem should be easier than digit or character classification, although from experience with the MNIST handwritten digit dataset, I expect that your circles, squares and diamonds may occasionally end up being difficult for even humans to distinguish.
So, if it were up to me I would use a CNN. I would input binary images taken from the drawing area to the first layer of the network. These may require some preprocessing. If the drawn shapes cover a very small area of the input space you may benefit from bulking them up (i.e. increasing line thickness) so as to make the shapes more invariant to small differences. It may also be beneficial to centre the shape in the image, although the pooling step might alleviate the need for this.
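To make this concrete, here is a minimal sketch of such a network in PyTorch (the framework choice, the 64x64 single-channel input and all layer sizes are my assumptions, not the answer's):

import torch
import torch.nn as nn

# Hedged sketch of a small 3-class CNN; sizes are illustrative only.
class ShapeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Linear(16 * 16 * 16, 3)  # circle / rectangle / diamond

    def forward(self, x):  # x: (N, 1, 64, 64) binary drawings
        return self.classifier(self.features(x).flatten(1))

logits = ShapeNet()(torch.randn(1, 1, 64, 64))  # smoke test: output shape (1, 3)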
I would also point out that the more training data the better. One is often faced with a trade-off between increasing the size of one's dataset and improving one's model. Synthesising more examples (e.g. by skewing, rotating, shifting, stretching, etc) or spending a few hours drawing shapes may provide more of a benefit than you could get in the same time attempting to improve your model.
Good luck with your app!
A linear Hough transform of the square or the diamond ought to be easy to recognize. They will both produce four point masses. The square's will be in pairs at zero and 90 degrees with the same y-coordinates for both pairs; in other words, a rectangle. The diamond will be at two other angles corresponding to how skinny the diamond is, e.g. 45 and 135 or else 60 and 120.
For the circle you need a circular Hough transform, and it will produce a single bright point cluster in 3d (x,y,r) Hough space.
Both linear and circular Hough transforms are implemented in OpenCV, and it's possible to run OpenCV on Android. These implementations include thresholding to identify lines and circles. See pg. 329 and pg. 331 of the documentation here.
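As a rough sketch of this pipeline with OpenCV's Python bindings (the equivalent functions exist in the Android Java API; the input file, thresholds and the 10-degree angle tolerance below are illustrative guesses, not tuned values):

import cv2
import numpy as np

# "shape.png" is a hypothetical binarized drawing (white strokes on black).
img = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)

# circular Hough: one strong (x, y, r) cluster -> circle
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=150, param2=40, minRadius=10, maxRadius=0)
# linear Hough: four segments -> rectangle or diamond, told apart by angle
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                        minLineLength=30, maxLineGap=10)

if circles is not None:
    print("CIRCLE")
elif lines is not None and len(lines) >= 4:
    angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180
              for [[x1, y1, x2, y2]] in lines]
    axis_aligned = all(min(a, abs(a - 90), 180 - a) < 10 for a in angles)
    print("RECTANGLE" if axis_aligned else "DIAMOND")
else:
    print("UNKNOWN")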
If you are not familiar with Hough transforms, the Wikipedia page is not bad.
Another algorithm you may find interesting and perhaps useful is given in this paper about polygon similarity. I implemented it many years ago, and it's still around here. If you can convert the figures to loops of vectors, this algorithm could compare them against patterns, and the similarity metric would show goodness of match. The algorithm ignores rotational orientation, so if your definition of square and diamond is with respect to the axes of the drawing surface, you will have to modify the algorithm a bit to differentiate these cases.
What you have here is a fairly standard classification task, in an arguably vision domain.
You could do this several ways, but the best way isn't known, and can sometimes depend on fine details of the problem.
So, this isn't an answer, per se, but there is a website - Kaggle.com - that runs competitions for classification. One of the sample/experimental tasks they list is reading single handwritten numeric digits. That is close enough to this problem that the same methods are almost certainly going to apply fairly well.
I suggest you go to https://www.kaggle.com/c/digit-recognizer and look around.
But if that is too vague, I can tell you from my reading of it, and from playing with that problem space, that Random Forests are a better basic starting place than neural networks.
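For illustration, a minimal scikit-learn version of that baseline; the flattened-raster feature representation (28x28 drawings), the class encoding and the placeholder data are all assumptions:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# placeholder data standing in for rasterized drawings:
# each row is a 28x28 binary image flattened to 784 features,
# labels: 0 = circle, 1 = rectangle, 2 = diamond
X = (np.random.rand(300, 784) > 0.5).astype(np.uint8)
y = np.random.randint(0, 3, size=300)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))  # with real drawings, evaluate on a held-out test set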
In this case (your 3 simple objects) you could try RANSAC fitting for an ellipse (getting the circle) and for lines (getting the sides of the rectangle or diamond) - on each connected object, if there are several objects to classify at the same time. Based on the actual setting (expected size, etc.) the RANSAC parameters (how close a point must be to count as a voter, how many voters you need at minimum) must be tuned. When you have found a line with RANSAC fitting, remove the points "close" to it and go for the next line. The angles of the lines should make distinguishing the diamond from the rectangle easy.
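A minimal NumPy sketch of the repeated line-fitting step described above (the iteration count and inlier tolerance are illustrative, not tuned values):

import numpy as np

def ransac_line(pts, iters=200, tol=2.0, seed=0):
    # pts: (N, 2) float array of points of one connected object
    rng = np.random.default_rng(seed)
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        n = np.array([pts[j, 1] - pts[i, 1], pts[i, 0] - pts[j, 0]])  # line normal
        norm = np.hypot(*n)
        if norm == 0:
            continue
        dist = np.abs((pts - pts[i]) @ (n / norm))  # point-to-line distances
        inliers = dist < tol                        # the "voters"
        if inliers.sum() > best.sum():
            best = inliers
    return best  # caller removes these points, then repeats for the next side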
A very simple approach optimized for classifying exactly these 3 objects could be the following:
compute the center of gravity of an object to classify
then compute the distances from the center to the object points as a function of the angle (from 0 to 2 pi).
classify the resulting graph based on the smoothness and/or variance and the position and height of the local maxima and minima (maybe after smoothing the graph); a sketch follows below.
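Here is a rough NumPy version of these three steps (the variance threshold and the peak counting are guesses; real input would likely need the smoothing mentioned above):

import numpy as np

def classify_radial(pts, circle_var=0.01):
    # pts: (N, 2) array of points on the shape's outline
    c = pts.mean(axis=0)                  # 1) center of gravity
    d = pts - c
    r = np.hypot(d[:, 0], d[:, 1])        # 2) distance from center...
    theta = np.arctan2(d[:, 1], d[:, 0])  #    ...as a function of the angle
    r = r[np.argsort(theta)] / r.max()    # radius profile over 0..2*pi, normalized
    if r.var() < circle_var:              # 3) nearly constant radius -> circle
        return "CIRCLE"
    # count local maxima of the profile (corners); both rectangle and diamond
    # give 4, and would be told apart by where the maxima sit
    peaks = np.sum((r > np.roll(r, 1)) & (r > np.roll(r, -1)))
    return "RECTANGLE/DIAMOND" if peaks == 4 else "UNKNOWN"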
I propose a way to do it in the following steps:
Take the convex hull of the image (assuming the shapes are convex)
Divide it into segments using clustering algorithms
Try to fit curves or straight lines to each segment, then measure and threshold against a training set, which can be used for classification
For your application, try to divide into 4 clusters.
Once you classify clusters as lines or curves, you can use that information to decide whether the shape is a circle, rectangle or diamond
I think the answers that are already in place are good, but perhaps a better way of thinking about it is that you should try to break the problem into meaningful pieces.
If possible avoid the problem entirely. For instance if you are recognizing gestures, just analyze the gestures in real time. With gestures you can provide feedback to the user as to how your program interpreted their gesture and the user will change what they are doing appropriately.
Clean up the image in question. Before you do anything, come up with an algorithm to select the correct thing you are trying to analyze. Also use an appropriate filter (convolution, perhaps) to remove image artifacts before you begin the process.
Once you have figured out what the thing is you are going to analyze then analyze it and return a score, one for circle, one for noise, one for line, and the last for pyramid.
Repeat this step with the next viable candidate until you come up with the best candidate that is not noise.
I suspect you will find that you don't need a complicated algorithm to find circle, line, pyramid but that it is more so about structuring your code appropriately.
If I were you, I'd use an already available image-processing library like AForge.
Take a look at this sample article:
http://www.aforgenet.com/articles/shape_checker
I have a jar on GitHub that can help, if you are willing to unpack it and obey the Apache license. You can try to recreate it in any other language as well.
It's an edge detector. The best steps from there could be to:
find the corners (median of 90 degrees)
find mean median and maximum radius
find skew/angle from horizontal
have a decision agent decide what the shape is
Play around with it and find what you want.
My jar is open to the public at this address. It is not yet production ready but can help.
Just thought I could help. If anyone wants to be a part of the project, please do.
I did this recently with identifying circles (bone centers) in medical images.
Note: Steps 1-2 are if you are grabbing from an image.
Pseudocode Steps
Step 1. Highlight the Edges
edges = edge_map(of the source image) (using edge detector(s))
(in layman's terms: show the lines/edges and make them searchable)
Step 2. Trace each unique edge
I would (use a nearest neighbor search 9x9 or 25x25) to identify / follow / trace each edge, collecting each point into the list (they become neighbors), and taking note of the gradient at each point.
This step produces: a set of edges.
(where one edge/curve/line = a list of [point_gradient_data_structure]s)
(in layman's terms: collect a set of points along the edge in the image)
Step 3. Analyze Each Edge('s points and gradient data)
For each edge,
if the gradient is similar for a given region/set of neighbors (a run of points along an edge), then we have a straight line.
If the gradient is changing gradually, we have a curve.
Each region/run of points that is a straight line or a curve has a mean (center) and other gradient statistics. (A rough sketch of this line-vs-curve test appears below.)
Step 4. Detect Objects
We can use the summary information from Step 3 to draw conclusions about diamonds, circles, or squares (e.g. 4 straight lines whose end points lie near each other, with proper gradients, make a diamond or square; one or more curves with sufficient points/gradients around a common focal point make a complete circle).
Note: Using an image pyramid can improve algorithm performance, both in terms of results and speed.
This technique (Steps 1-4) would get the job done for well defined shapes, and also could detect shapes that are drawn less than perfectly, and could handle slightly disconnected lines (if needed).
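As mentioned in Step 3, here is a rough Python/NumPy sketch of the line-vs-curve test on one traced edge; the tolerance and the "steady turning" heuristic are my guesses, not part of the original answer:

import numpy as np

def straight_or_curve(pts, line_tol_deg=5.0):
    # pts: ordered (N, 2) array of points along one traced edge
    d = np.diff(pts.astype(float), axis=0)
    ang = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))  # local direction (radians)
    if np.degrees(ang.max() - ang.min()) < line_tol_deg:
        return "LINE"                              # direction nearly constant
    turn = np.diff(ang)                            # per-step change in direction
    if turn.std() < abs(turn.mean()):              # steady turning suggests an arc
        return "CURVE"
    return "MIXED"                                 # e.g. a corner inside the run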
Note: With some machine learning techniques (mentioned by other posters), it could be helpful/important to have good "classifiers" to basically break the problem down into smaller parts/components, so then a decider further down the chain could use to better understand/"see" the objects. I think machine learning might be a little heavy-handed for this question, but still could produce reasonable results. PCA(face detection) could potentially work too.

How to collide objects with high speed in Unity

I'm trying to create a game for Android and I have a problem with high-speed objects: they don't want to collide.
I have a Sphere with a Sphere Collider and a Bouncy material, and a Rigidbody with these parameters (Gravity = false, Interpolate = Interpolate, Collision Detection = Continuous Dynamic).
I also have 3 walls with Box Colliders and the same Bouncy material.
This is my code for the Sphere:
function IncreaseBallVelocity() {
    rigidbody.velocity *= 1.05;
}

function Awake () {
    rigidbody.AddForce(4, 4, 0, ForceMode.Impulse);
    InvokeRepeating("IncreaseBallVelocity", 2, 2);
}
In Project Settings I set "Min Penetration For Penalty Force" = 0.001 and "Solver Iteration Count" = 50.
When I play, at the start it works fine (it bounces), but when the speed gets too high the Sphere just passes through the wall.
Can anyone help me?
Thanks.
Edited
var hit : RaycastHit;
var mainGameScript : MainGame;
var particles_splash : GameObject;

function Awake () {
    rigidbody.AddForce(4, 4, 0, ForceMode.Impulse);
    InvokeRepeating("IncreaseBallVelocity", 2, 2);
}

function Update() {
    if (rigidbody.SweepTest(transform.forward, hit, 0.5))
        Debug.Log(hit.distance + "mts distance to obstacle");
    if (transform.position.y < -3) {
        mainGameScript.GameOver();
        //Application.LoadLevel("Menu");
    }
}

function IncreaseBallVelocity() {
    rigidbody.velocity *= 1.05;
}

function OnCollisionEnter(collision : Collision) {
    Instantiate(particles_splash, transform.position, transform.rotation);
}
EDITED: added more info
Fixed Timestep = 0.02, Maximum Allowed Timestep = 0.333
There is no difference between running the game in the editor player and on Android
No. It looks OK when I set 0.01
My Paddle is a Box Collider without a Rigidbody; the walls are the same
They are all in the same layer (when the speed is normal it all works); the values in PhysicsManager are the defaults (same as in the image) except "Solver Iteration Count" = 50
No. When I change the speed it passes through another wall
I am using a standard cube but I expand/shrink it to fit my screen and other objects; when I expand the wall more, then it's OK, it bounces
No. It's a simple project, a simple example from this video: http://www.youtube.com/watch?v=edfd1HJmKPY
I don't use gravity
See:
Similar SO Question
A community script that uses ray tracing to help manage fast objects
UnityAnswers post leading to the script in (2)
You could also try changing the fixed time step for physics. The smaller this value, the more times Unity calculates the physics of a scene. But be warned, making this value too small, say <= 0.005, will likely result in an unstable game, especially on a portable device.
The script above is best for bullets or small objects. You can manually force rigid body collision tests:
public class example : MonoBehaviour {
    public RaycastHit hit;
    void Update() {
        if (rigidbody.SweepTest(transform.forward, out hit, 10))
            Debug.Log(hit.distance + "mts distance to obstacle");
    }
}
I think the main problem is the manipulation of the Rigidbody's velocity. I would try the following to solve the problem.
Redesign your code to ensure that IncreaseBallVelocity and every other manipulation of the Rigidbody is called within FixedUpdate. Check that there are no other manipulations of Transform.position.
Try to replace setting the velocity directly with AddForce or similar methods, so the physics engine has a higher chance to calculate all dependencies.
If there are more items (main player character, ...) involved in the physics calculation, ensure that their code runs in FixedUpdate too.
Another point I stumbled upon was meshes that are scaled very much. Having a GameObject with scale <= 0.01 or >= 100 definitely has a negative impact on physics calculation. According to the docs and this Unity forum entry from one of the gurus, you should avoid Transform.scale values != 1.
Still not happy? OK, then the next test is starting with high velocities but no acceleration. At this phase we want to know whether the high velocity itself or the acceleration is to blame for the problem. It would be interesting to know the velocity values at which the physics engine starts to fail - please post them so that we can compare.
EDIT: Some more things to investigate
6.7 m/sec does not sound that much so that I guess there is a special reason or a combination of reasons why things go wrong.
Is your Maximum Allowed Timestep high enough? For testing I suggest 5 to 10x Fixed Timestep. Note that this might kill the frame rate, but that can be fixed later.
Is there any difference between running the game in editor player and on Android?
Did you notice any drops in frame rate because of the 0.01 FixedTimestep? This would indicate that the physics engine might be in trouble.
Could it be that there are static colliders (objects having a collider but no Rigidbody) that are moved around or manipulated otherwise? This would cause heavy recalculations within PhysX.
What about the layers: are all walls on the same layer, and are the involved layers configured appropriately in the collision detection matrix?
Does the no-bounce effect always happen at the same wall? If so, can you just copy the 1st wall and put it in place of the second one to see if there is something wrong with this specific wall.
If it's not too much effort, I would try to set up some standard cubes as walls, just to be sure that transform.scale is not to blame (I have had really bad experiences with this).
Do you manipulate gravity or TimeManager.timeScale from within a script?
BTW: are you using gravity? (Should be no problem, just curious.)

Getting an exception when trying to tile ground in corona Sdk

I'm using the following code:
ground1.x = ground1.x - 10
ground2.x = ground2.x - 10
ground3.x = ground3.x - 10
ground4.x = ground4.x - 10
ground5.x = ground5.x - 10
ground6.x = ground6.x - 10
ground7.x = ground7.x - 10
ground8.x = ground8.x - 10
if (ground1.x < (0 - 75)) then
    ground1:removeSelf()
    ground1 = ground2
    ground2 = ground3
    ground3 = ground4
    ground4 = ground5
    ground6 = ground7
    ground7 = ground8
    local num = math.random ( 1, 4 )
    ground8 = display.newImage( group, "normalground"..num..".png", ground7.x + ground7.contentWidth/2, display.contentHeight - 52 )
to animate a moving ground. I'm using 8 tiles, ground1-ground8. This code is inside my animate function that is called on "enterFrame".
What I'm trying to do is detect when "ground1" has moved off the left edge. Then, I'm reassigning the tile ground2 to ground1, ground 3 to ground2, etc, and at the end, creating a new tile and assigning it to ground8.
I did something similar with my background scrolling, which is working fine. However, when I try to run this code, it works for a while (it scrolls the first 4 tiles successfully) but once it tries to assign tile 5 to ground1 and go back through the animation process, I get the following exception:
attempt to perform arithmetic on field 'x' (a nil value)
Any ideas?
You forgot to shift ground6 down to ground5.
I don't know Corona, so I don't know what removeSelf does internally, but I'm guessing it destroys the object and/or removes its metatable such that x is no longer a valid index. Since you copy the object reference in ground5 to ground4, then 3, 2, 1, it eventually gets destroyed in this way, at which point ground5.x returns nil and you get the exception you saw.
Tip: you should never have lists of variables that differ only by number (v1,v2,v3,etc.). That's what arrays are for. Rather than have 8 variables to hold ground images, you should have one array that holds all 8. Then you can use loops to perform operations like shifting them all N pixels.
For example, if we had your 8 images in a list ground (like ground = {ground1,ground2,ground3,ground4,ground5,ground6,ground7,ground8}, though you probably wouldn't initialize it that way), you could rewrite your code:
-- shift the ground 10 pixels to the left
for i,tile in pairs(ground) do
    tile.x = tile.x - 10
end

if ground[1].x < (0 - 75) then
    -- shift out the first tile
    ground[1]:removeSelf()
    for i=1,#ground-1 do
        ground[i] = ground[i+1]
    end
    -- add a new tile to the end
    local num = math.random ( 1, 4 )
    ground[#ground] = display.newImage( group, "normalground"..num..".png", ground[#ground-1].x + ground[#ground-1].contentWidth/2, display.contentHeight - 52 )
end
The code is more succinct, doesn't need to be changed if you shift to 10 ground tiles or 100, and avoids errors like the one you made in your OP.

How to calculate arcTo parameters in Android path

I am porting some code to Android from Visual C++. The VC++ ArcTo function takes the bounding rectangle and the start and end points as parameters to define the arc. The android.graphics.Path function arcTo takes the bounding rectangle and the "start angle" and "sweep angle" as parameters.
I am not clear how to convert from the VC set of coordinates to the Android set, or what these two angles are. The arc also has direction (CW or ACW) - I am not clear how to incorporate these in a single Path, or how to switch between one and the other.
One oddity I came across is that in the Android function, angles are expressed in degrees, rather than radians which is what most calculations would use and what one would expect.
I hope my question makes some sort of sense and that someone can help!
Edit: following on from the help I got from Dr Dredel, and with much drawing of diagrams, here's how I eventually translated the VC++ call to Android:
else if (coord.isArc())
{
    ptCentre = getPoint(new Coord(coord.getArcLat(), coord.getArcLong()));
    nRadius = getPixels(coord.getArcRadius());
    rect = new RectF(ptCentre.x - nRadius, ptCentre.y - nRadius,
                     ptCentre.x + nRadius, ptCentre.y + nRadius);
    if (coord.isClockwise())
    {
        alpha = Math.atan2(ptCentre.y - ptStart.y, ptCentre.x - ptStart.x) *
                Constants.k_d180Pi;
        beta = Math.atan2(ptCentre.y - ptEnd.y, ptEnd.x - ptCentre.x) *
               Constants.k_d180Pi;
        path.arcTo(rect, (float)(alpha + 180), (float)(180 - beta - alpha));
    }
    else
    {
    }
As you can see, I haven't done the anti-clockwise arc yet, but it should be similar. My calculation wasn't perfect, as I originally had (360 - beta - alpha) instead of (180 - beta - alpha), and the original version gave some very funny results!
(Wow! this formatting mechanism is the other side of weird!)
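For reference, the general conversion from centre/start/end points to Android's (startAngle, sweepAngle) pair can be written compactly. A hedged Python sketch of the math, relying on the documented Android convention that arcTo angles are in degrees, 0 is at 3 o'clock, and a positive sweep is drawn clockwise in screen coordinates (y pointing down):

import math

def arc_to_params(cx, cy, sx, sy, ex, ey, clockwise):
    # polar angles of the start and end points around the centre, in degrees;
    # with screen y growing downward, atan2 already matches Android's convention
    start = math.degrees(math.atan2(sy - cy, sx - cx))
    end = math.degrees(math.atan2(ey - cy, ex - cx))
    sweep = (end - start) % 360.0  # clockwise sweep in [0, 360)
    if not clockwise:
        sweep -= 360.0             # negative sweep = anti-clockwise
    return start, sweep            # feed to path.arcTo(rect, start, sweep)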

Implement page curl on android?

I was surfing the net looking for a nice effect for turning pages on Android and there just doesn't seem to be one. Since I'm learning the platform it seemed like a nice thing to be able to do is this.
I managed to find a page here: http://wdnuon.blogspot.com/2010/05/implementing-ibooks-page-curling-using.html
- (void)deform
{
    Vertex2f vi;   // Current input vertex
    Vertex3f v1;   // First stage of the deformation
    Vertex3f *vo;  // Pointer to the finished vertex
    CGFloat R, r, beta;

    for (ushort ii = 0; ii < numVertices_; ii++)
    {
        // Get the current input vertex.
        vi = inputMesh_[ii];
        // Radius of the circle circumscribed by vertex (vi.x, vi.y) around A on the x-y plane
        R = sqrt(vi.x * vi.x + pow(vi.y - A, 2));
        // Now get the radius of the cone cross section intersected by our vertex in 3D space.
        r = R * sin(theta);
        // Angle subtended by arc |ST| on the cone cross section.
        beta = asin(vi.x / R) / sin(theta);
        // *** MAGIC!!! ***
        v1.x = r * sin(beta);
        v1.y = R + A - r * (1 - cos(beta)) * sin(theta);
        v1.z = r * (1 - cos(beta)) * cos(theta);
        // Apply a basic rotation transform around the y axis to rotate the curled page.
        // These two steps could be combined through simple substitution, but are left
        // separate to keep the math simple for debugging and illustrative purposes.
        vo = &outputMesh_[ii];
        vo->x = (v1.x * cos(rho) - v1.z * sin(rho));
        vo->y = v1.y;
        vo->z = (v1.x * sin(rho) + v1.z * cos(rho));
    }
}
That page gives the example code above for the iPhone, but I have no idea how I would go about implementing this on Android. Could any of the math gods out there please help me out with how to implement this in Android Java?
Is it possible using the native drawing APIs, or would I have to use OpenGL? Could I mimic the behaviour somehow?
Any help would be appreciated. Thanks.
EDIT
I found a Bitmap Mesh example in the Android API demos: http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/graphics/BitmapMesh.html
Maybe someone could help me out with an equation to simply fold the top right corner inward diagonally across the page, to create a similar effect that I can later apply shadows to, to give it more depth?
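To make the math above easier to port, here is a hedged NumPy transcription of the deform() loop; A, theta and rho keep the blog post's meaning (cone apex offset, cone angle, page rotation), and the default values are only examples:

import numpy as np

def deform(xy, A=-1.0, theta=np.radians(30), rho=np.radians(60)):
    # xy: (N, 2) array of flat-page mesh vertices
    x, y = xy[:, 0], xy[:, 1]
    R = np.sqrt(x**2 + (y - A)**2)           # radius around the apex on the x-y plane
    r = R * np.sin(theta)                    # radius of the cone cross-section
    beta = np.arcsin(x / R) / np.sin(theta)  # angle subtended on the cross-section
    v1 = np.stack([
        r * np.sin(beta),
        R + A - r * (1 - np.cos(beta)) * np.sin(theta),
        r * (1 - np.cos(beta)) * np.cos(theta),
    ], axis=1)
    # rotate the curled page around the y axis, as in the Objective-C version
    out = np.empty_like(v1)
    out[:, 0] = v1[:, 0] * np.cos(rho) - v1[:, 2] * np.sin(rho)
    out[:, 1] = v1[:, 1]
    out[:, 2] = v1[:, 0] * np.sin(rho) + v1[:, 2] * np.cos(rho)
    return out  # (N, 3) curled-page vertices

# usage: deform(np.array([[0.1, 0.2], [0.5, 0.5]]))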
I'm doing some experimenting with a page curl effect on Android using OpenGL ES at the moment. It's quite a sketch actually, but maybe it gives some idea of how to implement page curl for your needs - that is, if you're interested in a 3D page flip implementation.
As for the formula you're referring to - I tried it out and didn't like the result too much. I'd say it simply doesn't fit a small screen very well, so I started to hack a simpler solution.
Code can be found here:
https://github.com/harism/android_page_curl/
While writing this I'm in the midst of deciding how to implement 'fake' soft shadows - and whether to create a proper application to show off this page curl effect. Also this is pretty much one of the very few OpenGL implementations I've ever done and shouldn't be taken too much as a proper example.
I just created an open source project which features a page curl simulation in 2D using the native canvas: https://github.com/moritz-wundke/android-page-curl
I'm still working on it to add adapters and such to make it usable as a standalone view.
EDIT: Links updated.
EDIT: Missing files have been pushed to the repo.
I'm pretty sure that you'd have to use OpenGL for a nice effect. The basic UI framework's capabilities are quite limited; you can only do basic transformations (alpha, translate, rotate) on Views using animations.
Though it might be possible to mimic something like that in 2D using a FrameLayout and a custom View inside it.
