I'm new to AndEngine and Box2D. So bear with me please.
I created a new project, set up a 480x800 camera, added a 32x32 sprite, created a physics world with Earth gravity and dropped the sprite. Lo and behold, it DID drop. But it didn't seem "natural" to me; it was too slow.
Then I realized that the gravity is in meters (m/s²) whereas the environment is in pixels. Where does the conversion between meters and pixels take place? There must be an assumption behind the scenes somewhere. Do I have any control over it?
How does Box2D know whether it's dropping the sprite from 100 meters above the ground (viewed from a distance, which would look like a very slow drop) or from 1 meter above the ground (viewed up close, which would look very fast)?
To test whether the conversion is the real problem, I multiplied the gravity by 10 and the motion looked more "natural". But I think there should be a more sophisticated way to convert pixels to meters.
Thanks in advance. I really appreciate your comments.
It's as @iforce2d said in the comment. In AndEngine the default value is 32, so 32 pixels are considered one meter. When converting pixels to meters, divide the pixel value by this ratio; when converting meters to pixels, multiply by it. You can find this value in the org.andengine.extension.physics.box2d.util.constants.PhysicsConstants class.
The ratio is then used in PhysicsFactory.create... methods if you don't specify your own. These methods create the physics body for you, measuring your sprite size in pixels and passing meters to Box2D. It's also used in the PhysicsConnector class constructor. Use your own value if 32 doesn't suit you, but then you will have to be consistent and use it every time.
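As a rough sketch of the conversion itself (the ratio constant is AndEngine's default mentioned above; the helper class and method names are just illustrative):

import org.andengine.extension.physics.box2d.util.constants.PhysicsConstants;

public final class UnitConversion {

    // 32 by default: 32 pixels on the Scene correspond to 1 meter in Box2D.
    private static final float PTM = PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT;

    // Scene pixels -> Box2D meters (divide by the ratio).
    public static float pixelsToMeters(final float pixels) {
        return pixels / PTM;
    }

    // Box2D meters -> Scene pixels (multiply by the ratio).
    public static float metersToPixels(final float meters) {
        return meters * PTM;
    }
}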
I have a ShapeRenderer with some lines in it. In my render() method I have to change the lines' alpha. Is there a better way to do this than setColor(r, g, b, calculated_alpha)?
I read that this always creates a new object with new Color(), which is not ideal.
I also have to do some calculations, say the distance between two points. Is it a good idea to calculate those on every render cycle? Is there a better way?
I am new to shaders, but there are lowp, mediump and highp precision qualifiers. I have a Nexus 6 and a Samsung G7, and I can't see any difference between those precisions. What are they for? On a low-end device, should I use lowp?
I just created a simple live wallpaper and my device sometimes gets a little hot. Can you help me with this?
1. That's wrong. Look at the source code if you are in doubt. The method just sets the values on its existing Color object and reuses it, so there is no problem setting the color like this.
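For instance, with libgdx's ShapeRenderer you can update the alpha every frame without allocating anything (a minimal sketch; the method and its parameters are made up for illustration, and blending must be enabled for alpha to have any effect):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.glutils.ShapeRenderer;

// shapeRenderer is created once (e.g. in create()), not every frame.
void drawFadingLine(ShapeRenderer shapeRenderer, float x1, float y1, float x2, float y2, float alpha) {
    Gdx.gl.glEnable(GL20.GL_BLEND);                 // alpha is ignored unless blending is on
    shapeRenderer.begin(ShapeRenderer.ShapeType.Line);
    shapeRenderer.setColor(1f, 1f, 1f, alpha);      // reuses the renderer's internal Color, no new object
    shapeRenderer.line(x1, y1, x2, y2);
    shapeRenderer.end();
}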
2. It depends on where you need it. If the points are static and do not change, calculate the distance once and reuse the result. If the points change position over time, you need to calculate the current distance within the render() method.
For calculating the distance, the Pythagorean theorem is usually used: http://www.mathwarehouse.com/algebra/distance_formula/index.php
If you use the Vector2 class to represent your points then you can just do:
float distance = point1.dst(point2);
dst() uses the Pythagorean theorem behind the scenes.
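A minimal sketch of the two cases (the class and field names are illustrative):

import com.badlogic.gdx.math.Vector2;

public class DistanceExample {
    // Static points: compute the distance once and reuse the result.
    private final Vector2 a = new Vector2(10f, 20f);
    private final Vector2 b = new Vector2(50f, 80f);
    private final float staticDistance = a.dst(b);

    // Moving points: recompute inside render(), because positions change per frame.
    public void render(float delta) {
        float currentDistance = a.dst(b);
        // ...use currentDistance for this frame only...
    }
}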
3. You will probably never see a difference between them with your eyes. The qualifier just controls how precise floating-point numbers in your shader are; mediump is the usual choice.
I have to make a mobile app that calculates the real life size of an object in an image.
I have done some research on it and found a helpful question: How would you find the height of objects given an image?
The relation between the camera distance and the real-life size of the object isn't actually that complex: the ratio of the size of the object on the sensor to its size in real life is the same as the ratio between the focal length and the distance to the object.
distance to object (mm) = focal length (mm) * real height of object (mm) * image height (pixels)
                          -----------------------------------------------------------------------
                                     object height (pixels) * sensor height (mm)
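Rearranged for the unknown, the same relation gives the real height once a distance is known or assumed. A small sketch with made-up numbers:

// Same ratio, solved for the real-world height of the object.
// All lengths in mm, except the two pixel counts.
static double realHeightMm(double distanceMm, double objectHeightPx,
                           double sensorHeightMm, double focalLengthMm,
                           double imageHeightPx) {
    return (distanceMm * objectHeightPx * sensorHeightMm)
            / (focalLengthMm * imageHeightPx);
}

// Example: 4.2 mm focal length, 4.8 mm sensor, 3000 px tall image,
// object spanning 600 px, assumed to be 500 mm away:
// realHeightMm(500, 600, 4.8, 4.2, 3000) ≈ 114 mm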
But how do I get the real height of the object if the distance is not known?
Do the tools that create 3D models from images have real-life dimensions?
The simple answer is you can't.
Incidentally, this is why humans have two eyes. If you want to judge size without a known distance, you'll need at least two reference points. This allows you to triangulate the position of the object, get a distance to it, and use your known focal distance to calculate the size.
The more complex answer is that there are ways around this, for example:
Cheat by using a known reference:
For example, if you have an object of known size in the image, you can infer the distance. This is similar to what NASA does to calibrate its cameras.
You can make safe assumptions if you're dealing with common objects, such as the height of one storey when analysing the image of a building.
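A sketch of the known-reference trick, assuming the reference object sits at roughly the same distance from the camera as the unknown one (names and numbers here are illustrative):

// If two objects are at (roughly) the same distance, their pixel heights
// scale with their real heights, so a known reference gives the unknown size.
static double sizeFromReferenceMm(double referenceRealMm, double referencePx, double unknownPx) {
    return referenceRealMm * (unknownPx / referencePx);
}

// Example: a credit card (~54 mm tall) spans 200 px and the unknown object spans 740 px:
// sizeFromReferenceMm(54, 200, 740) ≈ 200 mm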
Move your camera around:
This allows you to get more than one reference point with the same camera.
I suppose you could use the accelerometer to accurately measure the positional relation between the image captured at point T1 in time and the one captured at point T2. This would give you two images of the same subject with a known distance between them, which then allows you to triangulate as if you had two eyes.
Whether normal hand-held camera jitters will be sufficient for triangulation, or whether the accelerometer will be accurate enough to inertially position the phone, I don't know.
Assume a distance:
If your app is designed to compare something on the scale of a human hand (or other bit of human anatomy), you can probably safely assume a distance based on what people will naturally do. The focus limits of the camera itself will also give an upper and lower range on how far an object can be and still be in focus. This will probably be within a tolerable margin of error.
As you mention in your question, there is an entire subfield dedicated to this problem, and it is an active research area.
I've created a simple collision detection script, which works this way:
When the distance between the hero and an object is x pixels, the hero can "walk" those x pixels; when he would not collide with an object (hero + 3 px = no collision), he moves by 5 pixels.
But I also have to take the frame rate into account and therefore multiply his speed by the elapsed time / 20.
My problem is that when the frame rate at some point is very low or high, he moves by an additional pixel (1 px). The chance is very small, but it can still happen.
So what can I do to prevent this?
Add a position correction at the end of the post-collision check, or add a velocity correction at the end of the pre-collision check.
Post-collision: the object is translated back to the collision point.
Pre-collision: the object's speed is altered temporarily so that in the next frame it will be exactly at the point of collision.
Example:
Your object moves 75 pixels and tunnels through the wall. What to do? You need a position history of one iteration back. Looking at the history, you see it was behind the wall; at its current location it is now xx pixels past the wall. So you set its new position right next to the wall before painting.
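A minimal sketch of that post-collision correction, assuming movement along one axis and a wall whose left edge is at a known x position (names are illustrative):

// Post-collision position correction for horizontal movement, all values in pixels.
static float moveWithWallClamp(float heroX, float moveX, float heroWidth, float wallX) {
    float nextX = heroX + moveX;
    boolean startedBeforeWall = heroX + heroWidth <= wallX;
    boolean endsInsideWall = nextX + heroWidth > wallX;
    if (startedBeforeWall && endsInsideWall) {
        nextX = wallX - heroWidth;   // snap flush against the wall instead of tunnelling through it
    }
    return nextX;
}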
You cannot know when your Android device will lag: a better algorithm is needed to make it independent of fps. How? You can pause the whole world briefly until the fps is steady again, or store the next few iterations before painting and then calculate things before painting.
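And for the frame-rate side, one common approach (a sketch, not tied to any particular engine) is to clamp the elapsed time so a single laggy frame cannot produce a huge move:

public class Mover {
    static final float MAX_DELTA = 0.05f;   // never simulate more than 50 ms in one step
    float heroX = 0f;
    float speedPxPerSecond = 100f;          // roughly 5 px per frame at 20 fps

    void update(float deltaSeconds) {
        float dt = Math.min(deltaSeconds, MAX_DELTA);  // clamp lag spikes
        heroX += speedPxPerSecond * dt;                // movement scales with real elapsed time
        // ...run the collision check / position correction after this...
    }
}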
I am new to AndEngine. I have seen use of PIXEL_TO_METER_RATIO in lot of places but not able to understand when and how this constant is used. Can anyone guide to the right direction?
Box2D, the underlying physics engine used by AndEngine, uses meters as its standard unit. PIXEL_TO_METER_RATIO describes how many pixels in AndEngine correspond to one meter in the physics engine. For example, if you get the position of a Body, it will be in meters; you multiply it by the ratio to get a position on the Scene.
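For example (a sketch only; the ratio constant is AndEngine's default, and the body and sprite variables are assumed to exist already):

import org.andengine.extension.physics.box2d.util.constants.PhysicsConstants;

// Box2D reports the body position in meters; multiply by the ratio to get Scene pixels.
final float ratio = PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT;  // 32 by default
final float sceneX = body.getPosition().x * ratio;
final float sceneY = body.getPosition().y * ratio;
sprite.setPosition(sceneX - sprite.getWidth() / 2, sceneY - sprite.getHeight() / 2);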
The PTM_RATIO is defined because Box2D uses meters as its standard unit.
For example, a 480x320-pixel screen corresponds to a 15x10 meter Box2D world if PTM_RATIO is defined as 32.
Box2D works with floating point numbers and tolerances have to be used to make Box2D perform well. These tolerances have been tuned to work well with meters-kilogram-second (MKS) units. In particular, Box2D has been tuned to work well with moving objects between 0.1 and 10 meters. So this means objects between soup cans and buses in size should work well. Static objects may be up to 50 meters big without too much trouble.
reference: http://www.box2d.org/manual.html
This is my first post, so I apologise in advance if I have done anything wrong here in asking my question. I've looked all over the net for a specific answer, but can't find one, so here goes.....
I'm writing a game based on SurfaceView and so far all is going well. However, I want to move my main sprite by, for example, 1 pixel on a 160 DPI screen as a baseline (so basically 1 DIP, since 1 pixel = 1 DIP on a 160 DPI screen, correct?).
I'm using the following formula:
private static final float spritemovestep = 1f;
final float scale = getResources().getDisplayMetrics().density;
MoveX = (int) (spritemovestep * scale + 0.5f);
And then... something like
SpriteX=SpriteX+MoveX
First question - is this correct?
If it is, can someone explain what the +0.5f is actually for? I've read that it's to 'round up to the nearest number', but...
if spritemovestep = 1, then on a 120 DPI screen (which returns 0.75 as the scale, I think) it would work out as 1 x 0.75 + 0.5, which is 1.25. So what is the 0.5 for?
Also what is the result when it's cast to an int value?
On a low-density screen the final result sometimes seems to be 0, so the sprite isn't moving at all.
Also, some sprites which are supposed to be moving at different speeds move at the same speed at certain densities.
I'm sure I'm being silly and missing something here, but I just can't understand how this is supposed to work. If I want to move my sprite by 1 DIP/physical pixel on an MDPI screen, how can it move less than 1 pixel on an LDPI screen?
Also, what is this formula I keep seeing:
px = dp * (dpi / 160) - When is this used?
Would really appreciate if someone could answer my questions.
Thanks all
The +0.5f is to round to the nearest number, as you said. When the value is scaled down for ldpi, a value of 1 becomes 0.75, which, when cast to an int, truncates to 0. By adding the rounding term, this number is raised to 1.25, which casts to 1. This way, your sprite is drawn with a minimum movement of 1.

The only reason sprites that move at different speeds would move at the same speed is if their values are so close that, after scaling and rounding, they end up being the same number.

Altogether, your equation is very similar to others I've seen. I'm making a game that uses SurfaceView for the company I work for as well, and while I can't go into details on the code, your issue is one that I struggled with for some time. I'm not sure how your physics updates, but perhaps that's something you should check into, specifically how it counts ticks for your game timer. It may be that your application is reading its ticks as being too close together to reach the point where it would move the 1.25 (or 1 after casting to int), and therefore your sprite appears not to move. I briefly experienced that problem and at first was looking at my velocity until I found that the error was in the timer.

One other thing I noticed is that your algorithm collects the density. On an mdpi device, does this return 1 or 160? That could make a big difference, but I'm not sure, as the equation I used was different.

The other equation you found is a paraphrase of the equation listed in the development guide at developer.android.com that describes how the OS converts DIPs into pixels. The reason people tend to quote it is to provide a reference to help others build their own scaling algorithm appropriate for the needs of their app.

Hopefully that helps, as it's really the best answer I can give at this time. Sorry for any typing errors, I'm sending this from my phone.
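To make the rounding concrete, here is a small sketch; the Math.max(1, ...) guard at the end is one possible fix for the "sprite stops moving on ldpi" problem, not something from the original post:

// density is getResources().getDisplayMetrics().density:
// 0.75 on ldpi, 1.0 on mdpi, 1.5 on hdpi, 2.0 on xhdpi.
static int dipToPixels(float dip, float density) {
    return (int) (dip * density + 0.5f);   // +0.5f rounds to the nearest int instead of truncating
}

// 1 dip:  ldpi -> (int)(0.75 + 0.5f) = 1,  mdpi -> (int)(1.0 + 0.5f) = 1
// Without the +0.5f, ldpi would give (int) 0.75 = 0 and the sprite would never move.

// If several speeds still collapse to the same value after rounding, keep at least 1 px,
// or work in floats and only cast to int at draw time.
static int moveStep(float dipPerFrame, float density) {
    return Math.max(1, dipToPixels(dipPerFrame, density));
}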