I want to ask a basic question just to make sure. When we use the Vector2 class to represent a vector in AndEngine, for example in joint creation like this:
jointDef.localAnchorA.set(new Vector2(1, 1));
Do the values passed, i.e. 1, 1, represent 1 meter each?
A little more explanation. Suppose I have created two bodies like this:
Rectangle rect1 = new Rectangle(10, 10, 100, 100, vertexBufferObjectManager);
Body body1 = PhysicsFactory.createBoxBody(mPhysicsWorld, rect1, BodyType.DynamicBody, FIXTURE_DEF);
Rectangle rect2 = new Rectangle(110, 110, 50, 50, vertexBufferObjectManager);
Body body2 = PhysicsFactory.createBoxBody(mPhysicsWorld, rect2, BodyType.DynamicBody, FIXTURE_DEF);
And I want to create a revolute joint at the position shown in the image below:
So what values should I set for the vectors localAnchorA and localAnchorB to place the upper right corner of the red rectangle touching the center of the white rectangle? Like:
jointDef.localAnchorA.set(new Vector2(?, ?));
jointDef.localAnchorB.set(new Vector2(?, ?));
This would be very helpful in understanding the usage of the Vector2 class.
You need to understand what a Vector2 represents and stands for in game development. It basically encapsulates a pair of coordinates in 2D space. Taking it a step further, it has numerous applications, from distance calculations to other basic algebraic operations. The Stiegart Blog will give you a much clearer idea of Vector2 in Android. I hope this clears up the confusion and misunderstanding.
A Vector2 has no associated units; you decide what the coordinate space looks like on the screen when you render it (by choosing an appropriate camera). For angles you have to use radians.
But as far as I can see, you are passing it to the Box2D physics engine, where it is recommended to use units close to meters/kilograms/seconds. Making your character 2 m high would be a good choice; making your spaceship Enterprise 10 km across in your Asteroids game would be too much.
More information can be found in this post: http://box2d.org/2011/12/pixels/.
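For example, with AndEngine's default ratio of 32 pixels per meter, a local anchor is just the pixel offset from the body's center divided by that ratio. Here is a minimal sketch, assuming the PIXEL_TO_METER_RATIO_DEFAULT constant from AndEngine's Box2D extension (check its exact location in your AndEngine branch) and the jointDef from the question; the concrete values for your two rectangles depend on where their centers are:
// Sketch: convert a pixel offset from a body's center into a Box2D local anchor (meters).
final float PIXEL_TO_METER = PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT; // 32 by default
// e.g. a corner of a 100x100 px box lies 50 px from the box's center along each axis:
jointDef.localAnchorA.set(new Vector2(50 / PIXEL_TO_METER, 50 / PIXEL_TO_METER)); // roughly (1.56, 1.56) m
// the center of the other body is simply its local origin:
jointDef.localAnchorB.set(new Vector2(0, 0));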
I am working on an Android app that will recognize a Go board and create an SGF file of it.
I made a version that is able to detect a board and warp the perspective to make it square (code and example image below). Unfortunately, it gets a bit harder when stones are added (image below).
Important things about an average Go board:
round black and white stones
black lines on the board
board color ranges from white to light brown and sometimes with a wood grain
stones are placed on intersections of two lines
Correct me if I am wrong, but I think my current approach is not a good one.
Does anybody have a general idea of how I can separate the stones and lines from the rest of the picture?
My code:
Mat input = inputFrame.rgba(); //original image
Mat gray = new Mat(); //grayscale image
//convert image to grayscale
Imgproc.cvtColor(input, gray, Imgproc.COLOR_RGB2GRAY);
//try to improve the histogram (more contrast)
Imgproc.equalizeHist(gray, gray);
//blur image
Size s = new Size(5, 5);
Imgproc.GaussianBlur(gray, gray, s, 0);
//apply adaptive threshold
Imgproc.adaptiveThreshold(gray, gray, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, 11, 2);
//add a secondary threshold, removes a lot of noise
Imgproc.threshold(gray, gray, 0, 255, Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU);
Some images:
(source: eightytwo.axc.nl)
(source: eightytwo.axc.nl)
EDIT: 05-03-2016
Yay! I managed to detect lines, stones and colors correctly. Precondition: the picture has to contain only the board itself, without any other background visible.
I use HoughLinesP (60 lines) and HoughCircles (17 circles); it takes about 5 seconds on my phone (1st gen Moto G).
Detecting the board and warping it turns out to be quite a challenge when it has to work under different angles and lighting conditions... still working on that.
Suggestions for different approaches are still welcome!
(source: eightytwo.axc.nl)
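For reference, a rough sketch of those HoughLinesP/HoughCircles calls with OpenCV's Java bindings; all numeric parameters below are guesses that need tuning, and 'gray' is the preprocessed single-channel image from the code above:
Mat lines = new Mat();
//each entry of 'lines' holds one segment as (x1, y1, x2, y2)
Imgproc.HoughLinesP(gray, lines, 1, Math.PI / 180, 80, 100, 10);
Mat circles = new Mat();
//each entry of 'circles' holds (centerX, centerY, radius); use Imgproc.CV_HOUGH_GRADIENT on OpenCV 2.4
Imgproc.HoughCircles(gray, circles, Imgproc.HOUGH_GRADIENT, 1.0, 20, 100, 30, 5, 40);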
EDIT: 15-03-2016
I found a nice way to get line intersections with cross-type morphological transformations. It works amazingly well when the picture is taken directly above the board, but unfortunately not when it is taken at an angle (see below).
(source: eightytwo.axc.nl)
In my last update I showed line and stone detection with a picture taken from directly above. Since then I have been working on detecting the board and warping it so that my line and stone detection becomes useful.
Harris corner detection
I struggled to get the right parameter settings, and I am still not sure if they are optimal; I can't find much information on how to prepare the image before using Harris corners. Right now it detects too many corners to be useful, though it feels like it could work (upper row of pictures in the example).
Mat corners = new Mat();
//Harris corner response for each pixel (blockSize = 5, ksize = 3, k = 0.03)
Imgproc.cornerHarris(image, corners, 5, 3, 0.03);
//keep only responses above 1% of the strongest corner
Mat mask = new Mat(corners.size(), CvType.CV_8U, new Scalar(1));
Core.MinMaxLocResult maxVal = Core.minMaxLoc(corners);
Core.inRange(corners, new Scalar(maxVal.maxVal * 0.01), new Scalar(maxVal.maxVal), mask);
Cross-type morphological transformations
Works great when the picture is taken directly from above; used at an angle or with a rotated board, it does not work (middle row of pictures in the example).
Imgproc.GaussianBlur(image, image, new Size(5, 5), 0);
Imgproc.adaptiveThreshold(image, image, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY_INV, 11, 2);
int morph_elem = 1; //0: Rect - 1: Cross - 2: Ellipse
int morph_size = 5;
int morph_operator = 0; //0: Opening - 1: Closing - 2: Gradient - 3: Top Hat - 4: Black Hat
Mat element = Imgproc.getStructuringElement(morph_elem, new Size(2 * morph_size + 1, 2 * morph_size + 1), new Point(morph_size, morph_size));
//morph_operator + 2 maps the index above to the Imgproc.MORPH_* constant (MORPH_OPEN == 2)
Imgproc.morphologyEx(image, image, morph_operator + 2, element);
Contours and Hough lines
If there are no stones on the outer board line and the lighting conditions are not too harsh, it works pretty well. Quite often, though, the contours cover only part of the board (lower row of pictures in the example).
Imgproc.GaussianBlur(image, image, new Size(5, 5), 0);
Imgproc.adaptiveThreshold(image, image, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY_INV, 11, 2);
Mat hierarchy = new Mat();
MatOfPoint biggest = null;
int contourId = 0;
double biggestArea = 0;
double minSize = 2000;
List<MatOfPoint> contours = new ArrayList<>();
//'image' now holds the inverted, thresholded picture produced by the lines above
Imgproc.findContours(image, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
//find the biggest contour, hopefully the board
for (int x = 0; x < contours.size(); x++) {
    double area = Imgproc.contourArea(contours.get(x));
    if (area > minSize && area > biggestArea) {
        biggestArea = area;
        biggest = contours.get(x);
        contourId = x;
    }
}
Given the right picture, all three methods work, but not well enough to be reliable. Any thoughts on parameters, image pre-processing, different approaches, or anything else that might improve the detection are welcome =)
link to picture
EDIT: 31-03-2016
Detecting lines and stones is pretty much solved, so I will close this question. I created a new one for detecting and warping accurately.
Anybody interested in my progress: this is my GOSU Snap Alpha channel; don't expect too much of it right now!
EDIT: 16-10-2016
Update: I saw that some people are still following this question.
I tested some more things and started using TensorFlow; my neural network looks promising, and you can have a look at it here.
A lot of work still has to be done; my current image dataset is awful, and right now I am working on building a big dataset.
The app works best with a square board with thick lines and decent lighting.
Assuming that you don't want to "force" your end users to take the cleanest possible pictures (for example with an overlay, like some QR code scanners use):
Perhaps you could use some morphological transformations with different kernels:
Opening and closing with a rectangular kernel for the lines
Opening and closing with an elliptical kernel to get the stones (it should be possible to invert the image at some point to pick up the other color); see the sketch right after this list
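For illustration, a minimal sketch of that kernel idea with OpenCV's Java bindings; the kernel sizes are rough guesses to tune for your resolution, and 'binary' is assumed to be an inverted, thresholded image of the board (lines and stones white):
//long thin rectangular kernels keep the grid lines, an elliptical kernel keeps the stones
Mat horizontalKernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(15, 1));
Mat verticalKernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(1, 15));
Mat stoneKernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(9, 9));
Mat horizontalLines = new Mat();
Mat verticalLines = new Mat();
Mat stones = new Mat();
Imgproc.morphologyEx(binary, horizontalLines, Imgproc.MORPH_OPEN, horizontalKernel);
Imgproc.morphologyEx(binary, verticalLines, Imgproc.MORPH_OPEN, verticalKernel);
Imgproc.morphologyEx(binary, stones, Imgproc.MORPH_OPEN, stoneKernel);
//invert 'binary' and repeat the elliptical opening to pick up the other stone color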
Take a look at http://docs.opencv.org/2.4/doc/tutorials/imgproc/opening_closing_hats/opening_closing_hats.html (sorry, this one is in C++, but I think it is almost the same in Java).
I tried these operations to remove the grid from a Sudoku board to avoid noise in cell extraction, and it worked like a charm.
Let me know if this information was useful for you (this is for sure a very interesting case).
I'm working on the same program. I avoid finding lines at all.
First use a perspective transform to get the board into a square, as you have done. Find the edges of the 19x19 grid. Then, assuming the board is 19x19, you can just compute the positions of the lines. This works well for me. Then you find the grid intersection closest to the center of each stone to determine which row and column the stone is on. This works pretty well for me. The only problem is calibrating the program for different lighting conditions and different colors of stones and boards.
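A rough sketch of that last step, assuming the warped board image is square with a known margin around the 19x19 grid (all names below are illustrative):
//map a detected stone center (pixels on the warped board) to a grid row/column
static int toGridIndex(float coord, float margin, float boardSize) {
    float spacing = (boardSize - 2 * margin) / 18f; //19 lines -> 18 gaps
    int index = Math.round((coord - margin) / spacing);
    return Math.max(0, Math.min(18, index)); //clamp onto the board
}
//usage: int col = toGridIndex(stoneCenterX, margin, boardSize); same for the row with stoneCenterY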
I am creating my first game in AndEngine (GLES 2) and using Box2D for physics.
The collision detection works, but it doesn't seem to take into account the alpha values in the PNG files (I think this is what is happening), as the collision happens way before the two sprites actually touch. I don't need the collision to be pixel perfect, just reasonably accurate.
This is how I set up the collision detection:
final CharacterSprite characterSprite = new CharacterSprite(CAMERA_WIDTH/2, CAMERA_HEIGHT/2, this.mCharacterTextureRegion, this.getVertexBufferObjectManager());
mPhysicsWorld = new FixedStepPhysicsWorld(60, new Vector2(0, 0), false);
scene.registerUpdateHandler(mPhysicsWorld);
playerBody = PhysicsFactory.createBoxBody(mPhysicsWorld, characterSprite, BodyType.DynamicBody, PhysicsFactory.createFixtureDef(0, 0, 0));
playerBody.setUserData("player");
characterSprite.setBody(playerBody);
mPhysicsWorld.registerPhysicsConnector(new PhysicsConnector(characterSprite, playerBody, true, true));
mPhysicsWorld.setContactListener(createContactListener());
attachSprites(scene);
Thank you
Well, then use a shape other than a box (a circle might be a good choice) or use a custom-sized box that doesn't go all the way around your sprite. If you want even more precision, go with a polygon or even multiple polygons.
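For instance, AndEngine's physics factory can build a circle fixture from the sprite directly; a sketch under the assumption that your version has the createCircleBody factory (the parameters mirror the box version from the question):
//a circle body roughly matching the visible ball, so collisions start where the drawn circle ends,
//not at the transparent corners of the bitmap
final FixtureDef circleFixture = PhysicsFactory.createFixtureDef(1f, 0.5f, 0.5f);
playerBody = PhysicsFactory.createCircleBody(mPhysicsWorld, characterSprite, BodyType.DynamicBody, circleFixture);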
I am a newbie learner of JBox2D. I was just trying JBox2D for the first time on Android (I know Android development and I'm good at it) because my project needed physics.
Now, the tutorials and the official Box2D user manual say that negative gravity results in objects being pulled downwards. But in my case the object moves upwards when I set Vec2's second parameter to a negative value! Weird.
Here's the code, which results in a circle shape going up on its own:
The Gravity:
Vec2 gravity = new Vec2(0.0f, -50.0f);
boolean doSleep = true;
world = new World(gravity, doSleep);
The circle shape is created by the following code:
//body definition
BodyDef bd = new BodyDef();
bd.position.set(200, 500);
bd.type = BodyType.DYNAMIC;
//define shape of the body.
CircleShape cs = new CircleShape();
cs.m_radius = 10f;
//define fixture of the body.
FixtureDef fd = new FixtureDef();
fd.shape = cs;
fd.density = 1f;
fd.friction = 0.2f;
fd.restitution = 0.8f;
//create the body and add fixture to it
body = world.createBody(bd);
body.createFixture(fd);
And I'm using a SurfaceView canvas to draw:
canvas.drawCircle(body.getPosition().x, body.getPosition().y, 10, paint);
And stepping as follows:
float timeStep = 1.0f / 60.f;
int velocityIterations = 6;
int positionIterations = 2;
world.step(timeStep, velocityIterations, positionIterations);
So, what's wrong with my code? I am unable to identify the mistake I've made.
Also,
I'm making a tennis-like 2D game on Android for which I'll be using JBox2D. So, can anybody point me to a tutorial/book on JBox2D? Though I googled vigorously, I couldn't find a good one. (Box2D seems to be much more popular than JBox2D.)
I would be extremely grateful if someone could help me out here. Thank you.
Box2D uses the standard mathematical coordinate system: Y points up and X points to the right. Graphics systems usually have Y pointing down, because the window has a fixed top-left corner. It looks like your graphics system is the same, so what moves down in Box2D, you see moving up.
It is an irritating problem, and pointing gravity upwards is not the best solution. If you change only the gravity, you will have to think about the up-down mismatch in many other places, for example when defining bodies, applying forces and so on. The most irritating part is that it becomes hard to understand how physics coordinates correspond to graphics coordinates (in one of my projects I had to draw points on paper, turn the paper over, rotate it by 180 degrees and hold it up to the light :).
You can't change Box2D's coordinate system, but most likely you can easily change the coordinate system of your graphics system by changing the transformation matrix. For example, in OpenGL it looks like this:
glScalef(1.0, -1.0, 1.0);
But pay attention: after this, everything with a positive Y coordinate will be above the top edge of the window and therefore not visible, so you will need to work with negative coordinates. If you don't want that, you can additionally translate the matrix down like this:
glTranslatef(0.0, -windowHeight, 0.0);
But before you do, think about what should happen when the window is resized.
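Since the question draws with a SurfaceView Canvas rather than OpenGL, the same trick there might look like the sketch below (an assumption on my part; canvas.getHeight() stands in for the height of your drawing surface, and any pixel/meter scaling you use still applies):
//option 1: flip the whole canvas once per frame, then draw in Box2D's "Y up" orientation
canvas.save();
canvas.translate(0, canvas.getHeight());
canvas.scale(1, -1);
canvas.drawCircle(body.getPosition().x, body.getPosition().y, 10, paint);
canvas.restore();
//option 2: flip only the Y coordinate of each draw call
canvas.drawCircle(body.getPosition().x, canvas.getHeight() - body.getPosition().y, 10, paint);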
About the second question: I doubt you will find a tutorial or book specifically for JBox2D. JBox2D is a port of Box2D (that is, an exact copy of Box2D), and writing a special book for it would be strange. Learn Box2D and you will have no problems with JBox2D. For example, you could start there.
In Android I use a SurfaceView to display a simple 2D game. The bitmaps (.png) with alpha (representing the game objects) are drawn on the canvas.
Now I would like to do a simple but accurate collision detection. Checking whether these bitmaps are overlapping is quite easy.
But how do I check for collisions when these bitmaps have transparent areas? My challenge is detecting whether two balls collide or not. They fill the whole bitmap in both width and height, but all four corners contain transparent areas, of course, since it's a circle in a square.
What is the easiest way to detect a collision only when the balls really collide, not when their surrounding bitmap boxes overlap?
Do I have to store the coordinates of as many points on the ball's outline as possible? Or can Android "ignore" the alpha channel when checking for collisions?
Another method I can think of will work with simple objects that can be constructed using Paths.
Once you have two objects whose boundaries are represented by paths, you may try this:
Path path1 = new Path();
path1.addCircle(10, 10, 4, Path.Direction.CW);
Path path2 = new Path();
path2.addCircle(15, 15, 8, Path.Direction.CW);
Region region1 = new Region();
//'clip' is a Region covering the drawing area, e.g. new Region(0, 0, viewWidth, viewHeight)
region1.setPath(path1, clip);
Region region2 = new Region();
region2.setPath(path2, clip);
if (!region1.quickReject(region2) && region1.op(region2, Region.Op.INTERSECT)) {
    // Collision!
}
Once you have your objects as Paths, you can draw them directly using drawPath(). You can also perform movement by transform()ing the path.
If it is ball collision, you can perform analytical collision detection, which will be much faster than per-pixel detection. You only need the two ball centers (x1,y1) and (x2,y2) and the radii r1 for the first ball and r2 for the second one. If the distance between the centers is less than or equal to the sum of the radii, the balls are colliding:
collide = sqrt((x1-x2)^2 + (y1-y2)^2) <= r1 + r2
but a slightly faster way is to compare the square of this value:
collide = (x1-x2)^2 + (y1-y2)^2 <= (r1+r2)^2
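In Java that check could look like this (a tiny helper; comparing squared distances avoids the square root entirely):
//returns true if two circles overlap
static boolean circlesCollide(float x1, float y1, float r1, float x2, float y2, float r2) {
    float dx = x1 - x2;
    float dy = y1 - y2;
    float rSum = r1 + r2;
    return dx * dx + dy * dy <= rSum * rSum;
}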
It's much easier to use an existing library like AndEngine instead of reinventing the wheel. I'm not sure if it can be used with a SurfaceView though. Check this article: Pixel Perfect Collision Detection for AndEngine.
I want a cylindrical, spider-web-like layout:
I know that I can use a Canvas to draw this, but I also need all portions to be clickable, and with a Canvas it is very hard to handle touches for every portion.
Ideas?
"can i want layout like spider..."
Yes, you can want it. But if you want to actually create that layout, then you cannot do it with the standard Android widgets.
If you want to make it, then I would suggest drawing it on a Canvas manually and using an onTouchListener to catch the touches.
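A minimal sketch of that touch handling, assuming the web is drawn as concentric rings split into equal angular sectors around a known center (every name and parameter here is illustrative):
//convert a touch point into (ring, sector) indices
static int[] hitTest(float touchX, float touchY, float centerX, float centerY, float ringWidth, int sectorCount) {
    float dx = touchX - centerX;
    float dy = touchY - centerY;
    float distance = (float) Math.hypot(dx, dy);
    int ring = (int) (distance / ringWidth); //0 = innermost ring
    double angle = Math.toDegrees(Math.atan2(dy, dx)); //-180..180
    if (angle < 0) angle += 360;
    int sector = (int) (angle / (360.0 / sectorCount));
    return new int[] { ring, sector };
}
You would call this from your view's onTouchEvent() with event.getX() and event.getY(), and then highlight or handle whichever cell comes back.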
I am not sure, but I hope this can help you...
The Path class holds a set of vector-drawing commands such as lines,
rectangles, and curves. Here’s an example that defines a circular path:
Path circle = new Path();
circle.addCircle(150, 150, 100, Direction.CW);
This defines a circle at position x=150, y=150, with a radius of 100
pixels. Now that we’ve defined the path, let’s use it to draw the circle’s
outline plus some text around the inside:
private static final String QUOTE = "Now is the time for all " +
    "good men to come to the aid of their country.";
// cPaint and tPaint are ordinary Paint objects for the outline and the text
canvas.drawPath(circle, cPaint);
canvas.drawTextOnPath(QUOTE, circle, 0, 20, tPaint);
You can see the result in this Figure
If you want to get really fancy, Android provides a number of PathEffect
classes that let you do things such as apply a random permutation to a
path, cause all the line segments along a path to be smoothed out with
curves or broken up into segments, and create other effects.