I have a list of locations (longitude, latitude) for which I would like to calculate the standard deviation, but I'm not sure how to do it in two dimensions. Any ideas?
The concept of a single standard deviation does not generalize well to two dimensions. You can take the standard deviation of each component separately and you get two standard deviations. Then you can combine these into a single number by taking sqrt(X^2 + Y^2), where X and Y are the two standard deviations. This gives you a measure of how far, on average, the points are from the center of the point cloud.
The concept of variance, which is the square of the standard deviation, can be generalized, and it becomes the covariance matrix.
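If it helps, here is a minimal Java sketch of both ideas (the per-axis standard deviations combined into one number, and the 2x2 covariance matrix); the class and method names are only illustrative:

    // Sketch: per-axis standard deviations and the 2x2 covariance matrix
    // for a list of (longitude, latitude) points. Names are illustrative only.
    public class SpreadExample {

        /** Returns {meanX, meanY, sdX, sdY, combinedSpread}. */
        public static double[] spread(double[] xs, double[] ys) {
            int n = xs.length;
            double meanX = 0, meanY = 0;
            for (int i = 0; i < n; i++) { meanX += xs[i]; meanY += ys[i]; }
            meanX /= n;
            meanY /= n;

            double varX = 0, varY = 0;
            for (int i = 0; i < n; i++) {
                varX += (xs[i] - meanX) * (xs[i] - meanX);
                varY += (ys[i] - meanY) * (ys[i] - meanY);
            }
            varX /= n;   // population variance; divide by (n - 1) for the sample variance
            varY /= n;

            double sdX = Math.sqrt(varX);
            double sdY = Math.sqrt(varY);
            double combined = Math.sqrt(sdX * sdX + sdY * sdY);
            return new double[] { meanX, meanY, sdX, sdY, combined };
        }

        /** 2x2 covariance matrix [[varX, covXY], [covXY, varY]]. */
        public static double[][] covariance(double[] xs, double[] ys) {
            int n = xs.length;
            double meanX = 0, meanY = 0;
            for (int i = 0; i < n; i++) { meanX += xs[i]; meanY += ys[i]; }
            meanX /= n;
            meanY /= n;

            double varX = 0, varY = 0, covXY = 0;
            for (int i = 0; i < n; i++) {
                double dx = xs[i] - meanX, dy = ys[i] - meanY;
                varX += dx * dx;
                varY += dy * dy;
                covXY += dx * dy;
            }
            return new double[][] { { varX / n, covXY / n }, { covXY / n, varY / n } };
        }
    }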
I have two ArrayLists with double data. I am already using moving average smoothing. Data is collected every 200-500ms. This is what a typical graph (using GraphView in Android) looks like:
Since the data collection rate is limited by the hardware I am using, this is how jagged the result looks. Very easy to see individual points.
How do I make the function look smooth and continuous (either mathematically, by altering the ArrayLists, or by changing some setting in GraphView)?
Is polynomial fit the way to go, or should I use a combination of filtering and moving average?
I appreciate it!
It depends on how you want to smooth out the function. Your problem is that you do not have enough data points to make it look smooth, since GraphView draws a straight line between two data points. I do not think there is a way to draw a curve between two points in GraphView without using custom views. A custom view is another beast, so I do not think you want to do that. There are two ways you can solve this problem.
First, if you know beforehand that your data does not contain much noise, you can perform polynomial interpolation of all the points in your ArrayList. From that you get a function which you can use to build a new ArrayList, calculating the y-values at smaller x-steps than your data has. This way you can simulate a curve. But do note that interpolation is costly, especially as you go to higher polynomial orders.
If, on the other hand, you know that your data is noisy, interpolation will fit all of the noise as well, which is not what you want. In that case use the least squares method. You can go for any polynomial order; use the one that makes the most sense. Least squares is less time-consuming to compute than interpolation, and it has the advantage that if your noise is bias-free (meaning it sums to zero), it may give you a better approximation of the real values. Also note that in least squares the computation becomes significantly easier if your x-values are uniformly spaced. In your case this could be true, since you mention that you collect data every 200-500ms; if you can poll at a fixed rate, do it. There are various formulas available that make least squares calculations easy given fixed intervals.
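To make the second option concrete, here is a rough, self-contained Java sketch of a low-order least-squares fit followed by resampling on a finer x grid; it is only meant as a starting point and the names are made up:

    import java.util.ArrayList;
    import java.util.List;

    // Sketch: fit a low-order polynomial to the (x, y) samples by least squares,
    // then evaluate it on a finer x grid so GraphView has enough points to look
    // smooth. Degree 2 or 3 is usually plenty; higher degrees get numerically shaky.
    public class LeastSquaresSmoother {

        /** Returns coefficients c[0] + c[1]*x + ... + c[degree]*x^degree. */
        public static double[] fit(List<Double> xs, List<Double> ys, int degree) {
            int n = xs.size(), m = degree + 1;
            // Build the normal equations A * c = b, where A[i][j] = sum(x^(i+j))
            // and b[i] = sum(y * x^i), stored as an augmented matrix.
            double[][] a = new double[m][m + 1];
            for (int i = 0; i < m; i++) {
                for (int j = 0; j < m; j++) {
                    double s = 0;
                    for (int k = 0; k < n; k++) s += Math.pow(xs.get(k), i + j);
                    a[i][j] = s;
                }
                double s = 0;
                for (int k = 0; k < n; k++) s += ys.get(k) * Math.pow(xs.get(k), i);
                a[i][m] = s;
            }
            // Gaussian elimination with partial pivoting.
            for (int col = 0; col < m; col++) {
                int pivot = col;
                for (int r = col + 1; r < m; r++)
                    if (Math.abs(a[r][col]) > Math.abs(a[pivot][col])) pivot = r;
                double[] tmp = a[col]; a[col] = a[pivot]; a[pivot] = tmp;
                for (int r = col + 1; r < m; r++) {
                    double f = a[r][col] / a[col][col];
                    for (int c = col; c <= m; c++) a[r][c] -= f * a[col][c];
                }
            }
            // Back substitution.
            double[] coeff = new double[m];
            for (int i = m - 1; i >= 0; i--) {
                double s = a[i][m];
                for (int j = i + 1; j < m; j++) s -= a[i][j] * coeff[j];
                coeff[i] = s / a[i][i];
            }
            return coeff;
        }

        /** Evaluate the fitted polynomial on a denser grid for plotting. */
        public static List<double[]> resample(double[] coeff, double xMin, double xMax, int steps) {
            List<double[]> out = new ArrayList<>();
            for (int i = 0; i <= steps; i++) {
                double x = xMin + (xMax - xMin) * i / steps;
                double y = 0;
                for (int p = coeff.length - 1; p >= 0; p--) y = y * x + coeff[p]; // Horner's rule
                out.add(new double[] { x, y });
            }
            return out;
        }
    }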
I have a ShapeRenderer with some lines in it. In my renderer I have to change the lines' alpha. Is there a better way to do this than setColor(r, g, b, calculated_alpha)?
I read that this always creates a new object with new Color(), which is not ideal.
I have to do some calculations, for example the distance between points. Is it a good idea to calculate those every render cycle? Is there a better way?
I am new to shaders, but there are lowp, mediump and highp precision qualifiers. I have a Nexus 6 and a Samsung Galaxy S7, and I can't see any difference between those precisions. What are they for? On a low-end device should I use lowp?
I just created a simple live wallpaper and my device sometimes gets a little hot. Can you help me with this?
1. That's wrong. Look at the source code if you are in doubt. The method just sets the values on its existing Color object and reuses it. There is no problem with setting the color like this.
2. It depends on where you need it. If the points are static and do not change, then you want to calculate the distance once and reuse the result. If the points change position over time, then you need to calculate the current distance within the render() method.
The distance is usually calculated with the Pythagorean theorem: http://www.mathwarehouse.com/algebra/distance_formula/index.php
If you use the Vector2 class to represent your points then you can just do:
float distance = point1.dst(point2);
dst() uses the Pythagorean theorem behind the scenes.
3. You will probably never see a difference between them with your eyes. It just controls how precise the floating-point numbers in your shader are; mediump is usually used.
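Here is a small, illustrative libGDX sketch pulling points 1 and 2 together (setColor() reusing the renderer's internal Color, and the distance of static points cached outside render()); the class and field names are just examples:

    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.graphics.GL20;
    import com.badlogic.gdx.graphics.glutils.ShapeRenderer;
    import com.badlogic.gdx.math.Vector2;

    // Sketch only: setColor() writes into the renderer's internal Color, so
    // calling it per frame is fine. The distance between static points is
    // cached once instead of being recomputed every render() call.
    public class LineRenderer {
        private final ShapeRenderer shapes = new ShapeRenderer();
        private final Vector2 p1 = new Vector2(100, 100);
        private final Vector2 p2 = new Vector2(300, 250);
        private final float cachedDistance = p1.dst(p2); // points never move, so compute once

        public void render(float calculatedAlpha) {
            Gdx.gl.glEnable(GL20.GL_BLEND); // alpha only has an effect with blending enabled
            Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
            shapes.begin(ShapeRenderer.ShapeType.Line);
            shapes.setColor(1f, 1f, 1f, calculatedAlpha); // no new Color object is allocated
            shapes.line(p1, p2);
            shapes.end();
        }

        public float getDistance() {
            return cachedDistance;
        }
    }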
In OpenCV I use the camera to capture a scene containing two squares, a and b, both at the same distance from the camera, whose known real sizes are, say, 10 cm and 30 cm respectively. I find the pixel width of each square, which let's say are 25 and 40 pixels (to get the pixel width, OpenCV detects the squares as cv::Rect objects and I read their width field).
Now I remove square a from the scene and change the distance from the camera to square b. The program gets the width of square b now, which let's say is 80. Is there an equation, using the configuration of the camera (resolution, dpi?) which I can use to work out what the corresponding pixel width of square a would be if it were placed back in the scene at the same distance as square b?
The math you need for your problem can be found in chapter 9 of "Multiple View Geometry in Computer Vision", which happens to be freely available online: https://www.robots.ox.ac.uk/~vgg/hzbook/hzbook2/HZepipolar.pdf.
The short answer to your problem is:
No, not in this exact form. Given that you are working in a 3D world, you have one degree of freedom left. As a result you need more information to eliminate this degree of freedom (e.g. by knowing the depth and/or the relation of the two squares to each other, the movement of the camera, ...). This mainly depends on your specific situation. In any case, reading and understanding chapter 9 of the book should help you out here.
PS: to me it seems like your problem fits into the broader category of "baseline matching" problems. Reading around about this, in addition to epipolar geometry and the fundamental matrix, might help you out.
Since you write of "squares" with just a "width" in the image (as opposed to "trapezoids" with some wonky vertex coordinates) I assume that you are considering an ideal pinhole camera and ignoring any perspective distortion/foreshortening - i.e. there is no lens distortion and your planar objects are exactly parallel to the image/sensor plane.
Then it is a very simple 2D projective geometry problem, and no separate knowledge of the camera geometry is needed. Just write down the projection equations for the first situation: you have 4 unknowns (the camera focal length, the common depth of the squares, and the horizontal positions of, say, their left sides) and 4 equations (the projections of the left and right sides of each square). Solve the system and keep the focal length and the relative distance between the squares. Do the same for the second image, but now with a known focal length, and compute the new depth and horizontal location of square b. Then add the previously computed relative distance to find where square a would be.
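To make the width part of that concrete, here is a tiny illustrative sketch under those ideal pinhole assumptions: the pixel-per-mm scale at the new depth is read off square b and reused for square a. It only covers the width, not the horizontal position the full system of equations gives you, and the values are just the example numbers:

    // Sketch under the ideal pinhole assumptions above: at a common depth Z the
    // projected width is w = f * W / Z, so the scale factor f / Z can be read
    // off square b alone and reused for square a. Values are the example numbers.
    public class SameDepthWidth {
        public static void main(String[] args) {
            double realWidthA = 100;   // mm (10 cm), known real size of square a
            double realWidthB = 300;   // mm (30 cm), known real size of square b
            double pixelWidthB2 = 80;  // measured width of square b in the second scene

            // f / Z2 in pixels per mm, estimated from square b in the new scene.
            double scale = pixelWidthB2 / realWidthB;

            // Predicted width of square a if it sat at the same depth as b.
            double predictedA = scale * realWidthA;
            System.out.println("square a would be about " + predictedA + " px wide");
        }
    }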
In order to understand the transformations performed by the camera to project the 3D world into the 2D image, you need to know its calibration parameters. These are basically divided into two sets:
Intrinsic parameters: These are fixed parameters that are specific to each camera. They are normally represented by a matrix called K.
Extrinsic parameters: These depend on the camera position in the 3D world. Normally they are represented by two matrices, R and T, where the first represents the rotation and the second the translation.
In order to calibrate a camera you need some pattern (basically a set of 3D points whose coordinates are known). There are several examples of this in the OpenCV library, which provides support for performing the camera calibration:
http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html
Once you have your camera calibrated you can transform from 3D to 2D easily by the following equation:
P_image = K · [R | T] · P_3D
So it will not only depend on the position of the camera; it depends on all the calibration parameters. The following presentation goes through the camera calibration details and the different steps and equations that are used during the 3D <-> image transformations.
https://www.cs.umd.edu/class/fall2013/cmsc426/lectures/camera-calibration.pdf
With this in mind you can project any 3D point into the image and get its coordinates there. The reverse transformation is not unique, since going back from 2D to 3D gives you a line instead of a unique point.
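For illustration only, here is a plain-Java sketch of that projection for a single point, without OpenCV types; K, R and T would come from a calibration step such as the tutorial linked above, and the numbers in main() are made up:

    // Illustrative-only sketch of the projection P_image ~ K * [R | T] * P_3D
    // for a single 3D point, using plain arrays rather than OpenCV types.
    public class PinholeProjection {

        /** Projects a 3D point (world coordinates) to pixel coordinates. */
        public static double[] project(double[][] k, double[][] r, double[] t, double[] p3d) {
            // Camera coordinates: Pc = R * P + t
            double[] pc = new double[3];
            for (int i = 0; i < 3; i++) {
                pc[i] = t[i];
                for (int j = 0; j < 3; j++) pc[i] += r[i][j] * p3d[j];
            }
            // Homogeneous image coordinates: p = K * Pc
            double[] p = new double[3];
            for (int i = 0; i < 3; i++) {
                for (int j = 0; j < 3; j++) p[i] += k[i][j] * pc[j];
            }
            // Perspective divide gives the pixel position.
            return new double[] { p[0] / p[2], p[1] / p[2] };
        }

        public static void main(String[] args) {
            double[][] k = { { 800, 0, 320 }, { 0, 800, 240 }, { 0, 0, 1 } }; // made-up intrinsics
            double[][] r = { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };         // no rotation
            double[] t = { 0, 0, 0 };                                         // camera at the origin
            double[] point = { 0.1, 0.05, 2.0 };                              // metres in front of the camera
            double[] pixel = project(k, r, t, point);
            System.out.println("u=" + pixel[0] + " v=" + pixel[1]);
        }
    }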
I have to make a mobile app that calculates the real life size of an object in an image.
I have done some research on it and found a helpful question: How would you find the height of objects given an image?
The relationship between the distance to the camera and the real-life size of the object isn't actually that complex: the ratio of the size of the object on the sensor to its size in real life is the same as the ratio between the focal length and the distance to the object.
distance to object (mm) = (focal length (mm) × real height of object (mm) × image height (pixels)) / (object height (pixels) × sensor height (mm))
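As a small illustrative sketch (the names and units simply follow the formula above), that relation and its rearrangement look like this in code:

    // Sketch of the relation above and its rearrangement: with the distance
    // known you can solve for the real height, and vice versa. All inputs are
    // plain values in the units named in the parameters.
    public class PinholeSize {

        // distance (mm) = focal * realHeight * imageHeightPx / (objectHeightPx * sensorHeight)
        public static double distanceMm(double focalMm, double realHeightMm,
                                        double imageHeightPx, double objectHeightPx,
                                        double sensorHeightMm) {
            return (focalMm * realHeightMm * imageHeightPx) / (objectHeightPx * sensorHeightMm);
        }

        // Rearranged: real height (mm) given a known distance.
        public static double realHeightMm(double distanceMm, double focalMm,
                                          double imageHeightPx, double objectHeightPx,
                                          double sensorHeightMm) {
            return (distanceMm * objectHeightPx * sensorHeightMm) / (focalMm * imageHeightPx);
        }
    }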
But how do I get the real height of the object if the distance is not known?
Do the tools that create 3d models from images have real life dimensions?
The simple answer is you can't.
Incidentally, this is why humans have two eyes. If you want to judge size without a known distance, you'll need at least two reference points. This allows you to triangulate the position of the object, get a distance to it, and use your known focal distance to calculate the size.
The more complex answer is there are ways around this for example:
Cheat by using a known reference:
For example, if you have an object of known size in the scene, you can infer the distance. This is similar to what NASA does to calibrate its cameras.
You can make safe assumptions if you're dealing with common objects, such as the height of one storey when analysing the image of a building.
Move your camera around:
This allows you to get more than one reference point with the same camera.
I suppose you could use the accelerometer to accurately measure the positional relation between the images captured at points T1 and T2 in time. This would give you two images of the same subject with a known distance between them. This then allows you to triangulate as if you had two eyes.
Whether normal hand-held camera jitters will be sufficient for triangulation, or whether the accelerometer will be accurate enough to inertially position the phone, I don't know.
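For what it's worth, if the sideways movement (baseline) between the two shots were known, the classic two-view relation would give the depth. This is only a rough sketch with made-up numbers, and it assumes the translation is parallel to the image plane and the focal length is expressed in pixels:

    // Very rough sketch of the "two eyes" idea: a known baseline between two
    // shots plus the pixel disparity of the object between them gives the depth.
    public class TwoViewDepth {

        public static double depthMm(double focalPx, double baselineMm, double disparityPx) {
            return focalPx * baselineMm / disparityPx;
        }

        public static void main(String[] args) {
            double depth = depthMm(1500, 60, 45); // illustrative numbers only
            System.out.println("estimated depth: " + depth + " mm");
        }
    }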
Assume a distance:
If your app is designed to compare something on the scale of a human hand (or other bit of human anatomy), you can probably safely assume a distance based on what people will naturally do. The focus limits of the camera itself will also give an upper and lower range on how far an object can be and still be in focus. This will probably be within a tolerable margin of error.
As you mention in your question, there is an entire subfield dedicated to this question, and it is an active research area.
I'm writing this game on Android where I have a bunch of characters moving around who collide with each other. Everything works fine, but when I get past a certain number of characters on the screen at the same time, the performance of the app suffers severely. I did my tests, and drawing is not causing the low frame rate; it is the collision detection algorithm, since every time a character moves it has to check its location against all the other characters. So currently I'm just looping through all of them for each character. Is there a way to improve on this? Is there a performance trick for collision detection with a large number of objects that I don't know about?
Yes, there is a technique based on a first broad-phase and a second narrow-phase collision detection pass.
I'll quote some paragraphs from Beginning Android Games, by Mario Zechner.
Broad phase: In this phase we try to figure out which objects can potentially collide. Imagine having 100 objects that could each collide with each other. We'd need to perform 100 * 100 / 2 overlap tests if we chose to naively test each object against each other object. This naive overlap testing approach is of O(n^2) asymptotic complexity, meaning it would take n^2 steps to complete (it actually finishes in half that many steps, but the asymptotic complexity leaves out any constants). In a good, non-brute-force broad phase, we try to figure out which pairs of objects are actually in danger of colliding. Other pairs (e.g., two objects that are too far apart for a collision to happen) will not be checked. We can reduce the computational load this way, as narrow-phase testing is usually pretty expensive.
Narrow phase: Once we know which pairs of objects can potentially collide, we test whether they really collide or not by doing an overlap test of their bounding shapes.
The broad phase involves dividing the world into large cells, making some sort of grid.
Each cell has the exact same size, and the whole world is covered in cells. If two objects are not in the same cell, a narrow phase for those two objects is not needed.
To quote once again:
All we need to do is the following:
Update all objects in the world based on our physics and controller step.
Update the position of each bounding shape of each object according to the object’s position. We can of course also include the orientation and scale as well here.
Figure out which cell or cells each object is contained in based on its bounding shape, and add it to the list of objects contained in those cells.
Check for collisions, but only between object pairs that can collide (e.g., Goombas don’t collide with other Goombas) and are in the same cell.
This is called a spatial hash grid broad phase, and it is very easy to implement. The first thing we have to define is the size of each cell. This is highly dependent on the scale and units we use for our game’s world.
It also depends on the bounding shape you're using. A simple rectangle or circle around each character and its Euclidean distance is cheap to calculate, but a finer shape (including details such as "the head" or "the legs" with small additional bounding shapes) will be a lot more computationally expensive to check.
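For illustration, here is a rough Java sketch of such a spatial hash grid broad phase with simple circle bounds; the cell size, class names and the duplicate-pair handling are all things you would tune for your game:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Rough sketch of the spatial hash grid broad phase described above,
    // using a simple circle bound per character. Names are illustrative.
    public class SpatialHashGrid {
        public static class Body {
            public float x, y, radius;
            public Body(float x, float y, float radius) { this.x = x; this.y = y; this.radius = radius; }
        }

        private final float cellSize;
        private final Map<Long, List<Body>> cells = new HashMap<>();

        public SpatialHashGrid(float cellSize) { this.cellSize = cellSize; }

        private long key(int cx, int cy) { return (((long) cx) << 32) ^ (cy & 0xffffffffL); }

        /** Rebuild the grid each frame after the physics/controller update. */
        public void insert(Body b) {
            int minX = (int) Math.floor((b.x - b.radius) / cellSize);
            int maxX = (int) Math.floor((b.x + b.radius) / cellSize);
            int minY = (int) Math.floor((b.y - b.radius) / cellSize);
            int maxY = (int) Math.floor((b.y + b.radius) / cellSize);
            for (int cx = minX; cx <= maxX; cx++)
                for (int cy = minY; cy <= maxY; cy++)
                    cells.computeIfAbsent(key(cx, cy), k -> new ArrayList<>()).add(b);
        }

        public void clear() { cells.clear(); }

        /** Narrow phase is only run on bodies that share a cell. */
        public List<Body[]> collidingPairs() {
            List<Body[]> pairs = new ArrayList<>();
            for (List<Body> bucket : cells.values()) {
                for (int i = 0; i < bucket.size(); i++) {
                    for (int j = i + 1; j < bucket.size(); j++) {
                        Body a = bucket.get(i), b = bucket.get(j);
                        float dx = a.x - b.x, dy = a.y - b.y, r = a.radius + b.radius;
                        // Note: a pair sharing more than one cell can be reported
                        // twice; filter duplicates if that matters for your game.
                        if (dx * dx + dy * dy <= r * r) pairs.add(new Body[] { a, b });
                    }
                }
            }
            return pairs;
        }
    }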
If all objects are free to move to any part of the screen, then the best you can do is your O(n^2) algorithm. You can improve it by a constant factor by realizing that when you check if object A collides with object B, then you don't have to later check if object B collides with object A.
Enclose each character within a fixed-size square. Before you check for character collision, check whether the squares in which they are enclosed collide. If and only if the squares collide is there a chance for the characters to collide. Checking for square collisions is easy, as you only have to compare the x and y coordinates.
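A minimal sketch of that pre-check (also using the "only check each pair once" trick from the previous answer) might look like this; the names are just for illustration:

    // Sketch of the cheap square (axis-aligned bounding box) pre-check: only
    // when the boxes overlap is the more expensive character-level test run.
    // The inner loop starts at i + 1 so each pair is tested only once.
    public class BoundingBoxCheck {
        public static boolean boxesOverlap(float ax, float ay, float bx, float by, float size) {
            // Two squares of side 'size' centred on the characters overlap iff
            // both coordinate differences are smaller than the side length.
            return Math.abs(ax - bx) < size && Math.abs(ay - by) < size;
        }

        public static void checkAll(float[] xs, float[] ys, float size) {
            for (int i = 0; i < xs.length; i++) {
                for (int j = i + 1; j < xs.length; j++) {   // A vs B is never repeated as B vs A
                    if (boxesOverlap(xs[i], ys[i], xs[j], ys[j], size)) {
                        // only now run the detailed (expensive) collision test
                    }
                }
            }
        }
    }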
Dividing into a broad phase and narrow phase as Federico suggests only helps if your collision detection algorithm is expensive, i.e. it's not a simple bounding box.
Fortunately there are other options.
You could try a collision mask technique. Since you don't seem to be limited by rendering speed, render a bounding box for each object into a hidden bitmap. Before rendering the next object, check the pixels at the four corners of its bounding box to see if they have already been written. You can even use a different colour for each object so that the colour tells you which object the collision was with.
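A rough Android sketch of that idea, assuming an off-screen Bitmap/Canvas and one distinct colour per object (the names are illustrative, and the corner coordinates are assumed to lie inside the bitmap):

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Paint;

    // Rough sketch of the hidden-bitmap collision mask idea: each object's
    // bounding box is drawn into an off-screen bitmap in its own colour, and
    // the corners are sampled first to see whether something is already there.
    public class CollisionMask {
        private final Bitmap mask;
        private final Canvas canvas;
        private final Paint paint = new Paint();

        public CollisionMask(int width, int height) {
            mask = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
            canvas = new Canvas(mask);
        }

        /** Clear the mask at the start of every frame. */
        public void clear() {
            mask.eraseColor(Color.TRANSPARENT);
        }

        /** Returns the colour (object id) already found at a corner, or 0 if the box is free. */
        public int checkAndDraw(int left, int top, int right, int bottom, int objectColor) {
            int[][] corners = { { left, top }, { right - 1, top }, { left, bottom - 1 }, { right - 1, bottom - 1 } };
            for (int[] c : corners) {
                int existing = mask.getPixel(c[0], c[1]);
                if (existing != Color.TRANSPARENT) return existing; // collided with the object of that colour
            }
            paint.setColor(objectColor);
            canvas.drawRect(left, top, right, bottom, paint);
            return 0;
        }
    }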
Another popular trick is to simply not do every collision check every frame. For example, games like Super Mario Bros. only check for collisions between the player and enemies every other frame. You can do a more advanced version where you check all objects in round-robin fashion, doing as many checks as you can afford per frame. When things get busy, each object might only be checked every second or even every third frame, but the player is unlikely to notice. This works best if your objects are not moving so fast that they can pass through each other within a single frame.
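A small illustrative sketch of that round-robin budgeting, with a made-up GameObject interface standing in for your own character class:

    import java.util.List;

    // Sketch of the staggered ("round-robin") idea: only a budgeted number of
    // objects get their collision checks each frame, continuing where the
    // previous frame left off. The names and the budget are illustrative.
    public class StaggeredCollisionChecker {
        private int cursor = 0;

        public void update(List<GameObject> objects, int checksPerFrame) {
            int n = objects.size();
            if (n == 0) return;
            for (int done = 0; done < Math.min(checksPerFrame, n); done++) {
                GameObject current = objects.get(cursor);
                for (GameObject other : objects) {
                    if (other != current && current.overlaps(other)) {
                        current.onCollision(other);
                    }
                }
                cursor = (cursor + 1) % n; // pick up here next frame
            }
        }

        // Hypothetical game object interface used only for this sketch.
        public interface GameObject {
            boolean overlaps(GameObject other);
            void onCollision(GameObject other);
        }
    }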