Currently I'm writing an augmented reality app, and I have some problems getting the objects onto my screen. It's very frustrating that I'm not able to transform GPS points to the corresponding screen points on my Android device. I've read many articles and many other posts on Stack Overflow (I've already asked similar questions), but I still need your help.
I did the perspective projection which is explained on Wikipedia.
What do I have to do with the result of the perspective projection to get the resulting screen point?
The Wikipedia article also confused me when I read it some time ago. Here is my attempt to explain it differently:
The Situation
Let's simplify the situation. We have:
Our point to project, D(x,y,z) - what you call relativePositionX|Y|Z
An image plane of size w * h
A half-angle of view α
... and we want:
The coordinates of B, the projection of D onto the image plane (let's call them X and Y)
A schema for the X-screen-coordinates:
E is the position of our "eye" in this configuration, which I chose as origin to simplify.
The focal length f can be estimated knowing that:
tan(α) = (w/2) / f (1)
A bit of Geometry
You can see on the picture that the triangles ECD and EBM are similar, so using the Side-Splitter Theorem, we get:
MB / CD = EM / EC <=> X / x = f / z (2)
With both (1) and (2), we now have:
X = (x / z) * ( (w / 2) / tan(α) )
If we go back to the notation used in the Wikipedia article, our equation is equivalent to:
b_x = (d_x / d_z) * r_z
You can notice we are missing the multiplication by s_x / r_x. This is because in our case, the "display size" and the "recording surface" are the same, so s_x / r_x = 1.
Note: Same reasoning for Y.
Practical Use
Some remarks:
Usually, α = 45deg is used, which means tan(α) = 1. That's why this term doesn't appear in many implementations.
If you want to preserve the ratio of the elements you display, keep f constant for both X and Y, ie instead of calculating:
X = (x / z) * ( (w / 2) / tan(α) ) and Y = (y / z) * ( (h / 2) / tan(α) )
... do:
X = (x / z) * ( (min(w,h) / 2) / tan(α) ) and Y = (y / z) * ( (min(w,h) / 2) / tan(α) )
Note: when I said that "the 'display size' and the 'recording surface' are the same", that wasn't quite true, and the min operation is here to compensate for this approximation, adapting the square surface r to the potentially-rectangular surface s.
Note 2: Instead of using min(w,h) / 2, Appunta uses screenRatio = (getWidth()+getHeight())/2 as you noticed. Both solutions preserve the elements' ratio; the focal length, and thus the angle of view, will simply be a bit different depending on the screen's own ratio. You can actually use any function you want to define f.
As you may have noticed in the picture above, the screen coordinates here are defined between [-w/2 ; w/2] for X and [-h/2 ; h/2] for Y, but you probably want [0 ; w] and [0 ; h] instead. X += w/2 and Y += h/2 - problem solved.
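Putting the pieces together, here is a minimal Java sketch of the whole projection (the toScreen name and signature are mine, not from your code; it assumes z > 0, i.e. the point lies in front of the eye):

// Minimal sketch: project a camera-space point (x, y, z) onto the screen.
// alpha is the half-angle of view; w and h are the screen size in pixels.
static float[] toScreen(float x, float y, float z,
                        double alpha, float w, float h) {
    // (1): tan(alpha) = (w/2) / f, with min(w, h) to preserve the aspect ratio
    float f = (float) ((Math.min(w, h) / 2) / Math.tan(alpha));
    // (2): X / x = f / z, then shift from [-w/2 ; w/2] to [0 ; w]
    float screenX = (x / z) * f + w / 2;
    float screenY = (y / z) * f + h / 2;
    return new float[] { screenX, screenY };
}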
Conclusion
I hope this answers your questions. I'll stick around in case it needs edits.
Bye!
< Self-promotion Alert > I actually made an article some time ago about 3D projection and rendering. The implementation is in JavaScript, but it should be quite easy to translate.
TL;DR
How come the accelerometer values I get from Sensor.TYPE_ACCELEROMETER are slightly offset? I don't mean by gravity, but by some small error that varies from axis to axis and phone to phone.
Can I calibrate the accelerometer? Or is there a standard way of compensating for these errors?
I'm developing an app that has a need for as precise acceleration measurements as possible (mainly vertical acceleration, i.e. same direction as gravity).
I've been doing A LOT of testing, and it turns out that the raw values I get from Sensor.TYPE_ACCELEROMETER are off. If I let the phone rest on a perfectly horizontal surface with the screen up, the accelerometer shows a Z-value of 9.0, where it should be about 9.81. Likewise, if I put the phone in portrait or landscape mode, the X- and Y-accelerometer values show about 9.6 instead of 9.81.
This of course affects my vertical acceleration, as I'm using SensorManager.getRotationMatrixFromVector() to calculate the vertical acceleration, resulting in a vertical acceleration that is off by a different amount depending on the rotation of the device.
Now, before anyone jumps the gun and mentions that I should try using Sensor.TYPE_LINEAR_ACCELERATION instead, I must point out that I'm actually doing that as well, in parallel with TYPE_ACCELEROMETER. By using the gravity sensor I then calculate the vertical acceleration (as described in this answer). The funny thing is that I get EXACTLY the same result as with the method that uses the raw accelerometer, SensorManager.getRotationMatrixFromVector() and matrix multiplication (and finally subtracting gravity).
The only way I'm able to get almost exactly zero vertical acceleration for a stationary phone in any rotation is to take the raw accelerometer values, add an offset (from earlier observations, i.e. X+0.21, Y+0.21 and Z+0.81) and then perform the rotation matrix stuff to get the world coordinate system accelerations. Note that it's not just the calculated vertical acceleration that is wrong - it's actually the raw values from Sensor.TYPE_ACCELEROMETER, which I would think rules out other error sources like the gyroscope, etc.?
I have tested this on two different phones (Samsung Galaxy S5 and Sony Xperia Z3 compact), and both have these accelerometer value deviances - but of course not the same values on both phones.
How come the values of Sensor.TYPE_ACCELEROMETER are off, and is there a better way of "calibrating" the accelerometer than simply observing how much they deviate from gravity and adding the difference to the values before using them?
You should calibrate the gains, offsets, and angles of the 3 accelerometer axes.
Unfortunately it's not possible to cover the whole topic in depth here.
I'll write a small introduction describing the basic concept, and then I'll post a link to the code of a simple Clinometer that implements the calibration.
The calibration routine can be done with 7 measurements (take the mean of a good number of samples for each) in different orthogonal positions of your choice, so that each axis of your accelerometer sees +g, -g, and ±0. For example:
STEP 1 = Lay flat
STEP 2 = Rotate 180°
STEP 3 = Lay on the left side
STEP 4 = Rotate 180°
STEP 5 = Lay vertical
STEP 6 = Rotate 180° upside-down
STEP 7 = Lay face down
Then you can use the 7 measurements mean[][] to calculate offsets and gains:
// Offset = mean of the +g and -g readings on each axis (ideally 0);
// gain = half their difference, normalized by g (ideally 1).
calibrationOffset[0] = (mean[0][2] + mean[0][3]) / 2;
calibrationOffset[1] = (mean[1][4] + mean[1][5]) / 2;
calibrationOffset[2] = (mean[2][0] + mean[2][6]) / 2;
calibrationGain[0] = (mean[0][2] - mean[0][3]) / (STANDARD_GRAVITY * 2);
calibrationGain[1] = (mean[1][4] - mean[1][5]) / (STANDARD_GRAVITY * 2);
calibrationGain[2] = (mean[2][0] - mean[2][6]) / (STANDARD_GRAVITY * 2);
using the values of mean[axis][step], where STANDARD_GRAVITY = 9.81.
Then apply the gain and offset corrections to the measurements:
for (int i = 0; i < 7; i++) {
mean[0][i] = (mean[0][i] - calibrationOffset[0]) / calibrationGain[0];
mean[1][i] = (mean[1][i] - calibrationOffset[1]) / calibrationGain[1];
mean[2][i] = (mean[2][i] - calibrationOffset[2]) / calibrationGain[2];
}
and finally calculate the correction angles:
for (int i = 0; i < 7; i++) {
    float norm = (float) Math.sqrt(mean[0][i] * mean[0][i]
            + mean[1][i] * mean[1][i] + mean[2][i] * mean[2][i]);
    angle[0][i] = (float) Math.toDegrees(Math.asin(mean[0][i] / norm));
    angle[1][i] = (float) Math.toDegrees(Math.asin(mean[1][i] / norm));
    angle[2][i] = (float) Math.toDegrees(Math.asin(mean[2][i] / norm));
}
calibrationAngle[2] = (angle[0][0] + angle[0][1])/2; // angle 0 = X axis
calibrationAngle[1] = -(angle[1][0] + angle[1][1])/2; // angle 1 = Y axis
calibrationAngle[0] = -(angle[1][3] - angle[1][2])/2; // angle 2 = Z axis
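To make the use of these parameters concrete, here is a minimal sketch of applying the gain/offset correction to live readings (the listener wiring and the calibrated[] name are mine; it assumes the calibrationOffset[] and calibrationGain[] arrays computed above):

// Inside your SensorEventListener:
@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) return;
    float[] calibrated = new float[3];
    for (int axis = 0; axis < 3; axis++) {
        // The same correction that was applied to the 7 calibration means.
        calibrated[axis] = (event.values[axis] - calibrationOffset[axis])
                / calibrationGain[axis];
    }
    // calibrated[] now holds gain/offset-corrected values in m/s^2;
    // the angle correction (calibrationAngle[]) still has to be applied.
}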
You can find a simple but complete implementation of a 3-axis calibration in this opensource Clinometer app: https://github.com/BasicAirData/Clinometer.
There is also the APK and the link of the Google Play Store if you want to try it.
You can find the calibration routine in CalibrationActivity.java;
The calibration parameters are applied in ClinometerActivity.java.
Furthermore, you can find a very good technical article that covers 3-axis calibration in depth here: https://www.digikey.it/it/articles/using-an-accelerometer-for-inclination-sensing.
Does RenderScript guarantee the memory layout or stride in global pointers bound from the Java layer?
I read somewhere that it is best to use rsGetElementAt / rsSetElementAt functions because the layout is not guaranteed.
But elsewhere it was said to avoid those when targeting GPU optimizations, whereas bound pointers are OK.
In my particular case, I need the kernel to access the value of many surrounding pixels. So far, I have done quite well with float pointers bound from the Java layer.
Java:
script.set_width(inputWidth);
script.bind_input(inputAllocation);
RS:
int width;
float *input;
void root(const float *v_in, float *v_out, uint32_t x, uint32_t y) {
int current = x + width * y;
int above = current - width;
int below = current + width;
*v_out = input[above - 1] + input[above ] + input[above + 1] +
input[current - 1] + input[current] + input[current + 1] +
input[below - 1] + input[below ] + input[below + 1] ;
}
This is a trivial simplification of what I'm actually doing, just to illustrate with an easy example. In reality, I'm doing far more of these combinations, and with multiple input images at the same time - so much so that simply pre-computing the positions of the "above" and "below" rows helps a great deal with the processing time.
As long as memory is guaranteed to be sequential and in the same order you'd normally expect, all is good, and so far I haven't had any problems on my test devices.
But if this memory layout is truly not guaranteed across all devices/processors, and the stride can actually vary, then my code would obviously break and I'd be forced to use rsGetElementAt, such as:
Java:
script.set_input(inputAllocation);
RS:
rs_allocation input;
void root(const float *v_in, float *v_out, uint32_t x, uint32_t y) {
*v_out = rsGetElementAt_float(input, x - 1, y - 1) + rsGetElementAt_float(input, x, y - 1) + rsGetElementAt_float(input, x + 1, y - 1) +
rsGetElementAt_float(input, x - 1, y ) + rsGetElementAt_float(input, x, y ) + rsGetElementAt_float(input, x + 1, y ) +
rsGetElementAt_float(input, x - 1, y + 1) + rsGetElementAt_float(input, x, y + 1) + rsGetElementAt_float(input, x + 1, y + 1) ;
}
The average execution time of the script using rsGetElementAt() (710 ms) is almost twice that of the kernel using input[] (390 ms), I'm guessing because each call must independently re-compute the memory offset for the given x,y coordinates.
My script needs to run continuously, so I'm trying to get every possible bit of performance out of it, and it would be a real pity to ignore such a considerable speedup.
So I'm wondering if anyone could shed some light on this.
Are there really any cases under which bound pointers will not be fully sequential, and is there a way to force them to be?
Is rsGetElementAt() truly necessary in this case, or is it safe to keep using bound pointers relying on a pre-defined stride?
Bound pointers are only guaranteed to be sequential for simple 1D allocations. Any type with more than one dimension should be accessed with the rsGetElementAt_/rsSetElementAt_ helpers.
Comments on performance:
rsGetElementAt_float() will typically outperform rsGetElementAt() because it knows the type and can avoid the lookup for stride. This is true of all the typed get/set methods.
Which OS version are you testing on? 4.4 brought some major improvements to this type of code which should be able to pull the address calculations out of the loops for many cases.
The manipulate-the-pointers approach will force some GPU drivers to fall back to the safe path.
Some newer drivers (4.4.1) will use the HW address calculation unit, removing the overhead completely.
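For example, one way to keep the fast bound-pointer path while staying inside the 1D guarantee is to pack the image into a simple 1D allocation on the Java side. A sketch under that assumption (pixels is a hypothetical row-major float[] of length inputWidth * inputHeight):

// The bound pointer of a simple 1D allocation is guaranteed sequential,
// so the kernel can keep indexing it as x + width * y, as in the first snippet.
Allocation input1d = Allocation.createSized(rs, Element.F32(rs),
        inputWidth * inputHeight);
input1d.copyFrom(pixels);
script.set_width(inputWidth);
script.bind_input(input1d);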
So this is my second question today; I might be pushing my luck.
In short, I'm making a 3D first-person game where you can move about and look around.
In my onDrawFrame I am using
Matrix.setLookAtM(mViewMatrix, 0, eyeX , eyeY, eyeZ , lookX , lookY , lookZ , upX, upY, upZ);
To move back, forth, sidestep left, etc., I use something like this (forward code listed):
float v[] = {mRenderer.lookX - mRenderer.eyeX,mRenderer.lookY - mRenderer.eyeY, mRenderer.lookZ - mRenderer.eyeZ};
mRenderer.eyeX += v[0] * SPEED_MOVE;
mRenderer.eyeZ += v[2] * SPEED_MOVE;
mRenderer.lookX += v[0] * SPEED_MOVE;
mRenderer.lookZ += v[2] * SPEED_MOVE;
This works.
Now I want to look around, so I tried to port my iPhone OpenGL ES 1.0 code. This is left/right:
float v[] = {mRenderer.lookX - mRenderer.eyeX,mRenderer.lookY - mRenderer.eyeY, mRenderer.lookZ - mRenderer.eyeZ};
if (x > mPreviousX )
{
mRenderer.lookX += ((Math.cos(SPEED_TURN / 2) * v[0]) - (Math.sin(SPEED_TURN / 2) * v[2]));
mRenderer.lookZ += ((Math.sin(SPEED_TURN / 2) * v[0]) + (Math.cos(SPEED_TURN / 2) * v[2]));
}
else
{
mRenderer.lookX -= (Math.cos(SPEED_TURN / 2) *v[0] - Math.sin(SPEED_TURN / 2) * v[2]);
mRenderer.lookZ -= (Math.sin(SPEED_TURN / 2) *v[0] + Math.cos(SPEED_TURN / 2) * v[2]);
}
This works for about 35 degrees and then goes mental.
Any ideas?
First of all, I would suggest not tracking the look point but rather a forward vector; then, in the lookAt method, use eye + forward to generate the look point. This way you can drop the update of the look point completely when moving, and you don't need to compute the v vector (mRenderer.eyeX += forward.x * SPEED_MOVE; ...).
To make things simpler, I suggest you normalize the forward and up vectors whenever you change them (and I will assume you did in the following methods).
Now as for rotation, there are 2 ways. Either use the right and up vectors to nudge the forward (and up) vectors, which is great for small turns (I'd say up to about 10 degrees, and it is capped at 90 degrees), or compute the current angle, add any angle you want, and recreate the vectors.
The first rotation method is quite simple:
vector forward = forward
vector up = up
vector right = cross(forward, up) //this one might be the other way around, as cross(up, forward) :)
//turning left or right:
forward = normalized(forward + right*rotationSpeedX)
//looking up or down:
forward = normalized(forward + up*rotationSpeedY)
right = cross(forward, up) //again, possibly the other way around
up = normalized(cross(forward, right)) //again, possibly the other way around
//tilting left or right:
up = normalized(up + right*rotationSpeedZ)
The second method needs a bit of trigonometry:
Normally, to compute an angle you could just call atan(forward.z / forward.x) and add some if statements, since the produced result only covers a 180-degree range (I am sure you will be able to find answers on the web about getting a rotation from a vector, though). The same goes for the up vector when getting the vertical rotation. After you get the angles, you can simply add some degrees to them and recreate the vectors with sin and cos.
There is a catch though: if you rotate the camera in such a way that forward faces straight up (0,1,0), you need to get the first rotation from the up vector and the second from the forward vector. You can avoid all that if you cap the maximum vertical angle to something like ±85 degrees (and there are many games that actually do that). The second thing is that if you use this approach, your environment must support ±infinity, or atan(forward.z / forward.x) will break when forward.x == 0.
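As a side note, Java's Math.atan2 sidesteps both of those catches. A sketch, where fx/fy/fz, turnDeg and tiltDeg are placeholder names and forward is assumed normalized:

// atan2 covers the full 360-degree range and handles fx == 0 without
// needing +-infinity support or extra if statements.
float yawDeg   = (float) Math.toDegrees(Math.atan2(fz, fx));
float pitchDeg = (float) Math.toDegrees(Math.asin(fy));
// Add the desired turn, then recreate the vector with sin and cos:
double yaw   = Math.toRadians(yawDeg + turnDeg);
double pitch = Math.toRadians(pitchDeg + tiltDeg);
fx = (float) (Math.cos(pitch) * Math.cos(yaw));
fy = (float) Math.sin(pitch);
fz = (float) (Math.cos(pitch) * Math.sin(yaw));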
And an addition about the first approach: since I see you are moving around on the 2D plane, the forward vector you use with the movement speed should be normalized(forward.x, 0, forward.z); it is important to normalize it, or you will move more slowly when the camera tilts up or down.
The second thing is that when you rotate left/right you might want to force the up vector to (0,1,0), normalize the right vector, and lastly recreate the up vector from forward and right. Again, you should then cap the vertical rotation (up.z should stay larger than some small value like .01).
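For completeness, a minimal Java sketch of the first method (the array-based helpers are mine; forward and up are assumed normalized, and the eye/up names are taken from your renderer):

static float[] normalize(float[] v) {
    float len = (float) Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    return new float[] { v[0]/len, v[1]/len, v[2]/len };
}

static float[] cross(float[] a, float[] b) {
    return new float[] { a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0] };
}

// Turning left/right by a small rotationSpeedX:
float[] right = cross(forward, up);
forward = normalize(new float[] {
        forward[0] + right[0] * rotationSpeedX,
        forward[1] + right[1] * rotationSpeedX,
        forward[2] + right[2] * rotationSpeedX });
// Then look from eye towards eye + forward:
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ,
        eyeX + forward[0], eyeY + forward[1], eyeZ + forward[2],
        up[0], up[1], up[2]);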
It turned out my rotation code was wrong:
if (x > mPreviousX )
{
mRenderer.lookX = (float) (mRenderer.eyeX + ((Math.cos(SPEED_TURN / 2) * v[0]) - (Math.sin(SPEED_TURN / 2) * v[2])));
mRenderer.lookZ = (float) (mRenderer.eyeZ + ((Math.sin(SPEED_TURN / 2) * v[0]) + (Math.cos(SPEED_TURN / 2) * v[2])));
}
else
{
mRenderer.lookX = (float) (mRenderer.eyeX + ((Math.cos(-SPEED_TURN / 2) * v[0]) - (Math.sin(-SPEED_TURN / 2) * v[2])));
mRenderer.lookZ = (float) (mRenderer.eyeZ + ((Math.sin(-SPEED_TURN / 2) * v[0]) + (Math.cos(-SPEED_TURN / 2) * v[2])));
}
Something seems funny about the way that Android's frustumM works. If I check the OpenGL red book (glprogramming.com), the matrix generated looks like this:

2n/(r-l)    0           (r+l)/(r-l)     0
0           2n/(t-b)    (t+b)/(t-b)     0
0           0           -(f+n)/(f-n)    -2fn/(f-n)
0           0           -1              0

Songho.ca seems to agree with this. However, with Android's frustumM one component is multiplied by 2 that isn't in the example matrices above: everything seems to functionally match up except the first row, third column, which comes out as 2(r+l)/(r-l). Why is that being multiplied by two? Here are the lines of code from android.opengl.Matrix's frustumM method that generate the first three elements of the third column:
final float A = 2.0f * ((right + left) * r_width);
final float B = (top + bottom) * r_height;
final float C = (far + near) * r_depth;
With r_width, r_height, r_depth defined as:
final float r_width = 1.0f / (right - left);
final float r_height = 1.0f / (top - bottom);
final float r_depth = 1.0f / (near - far);
The line starting with "final float A" appears to be mistakenly multiplying by 2.
Is this a mistake in Android's code, or am I just missing something? I know that the term cancels out if the frustum is symmetrical. When running the code with an asymmetrical frustum, the generated matrices actually are different and so are the resulting vectors when the same vector is multiplied with those differing matrices.
It's a bug with Android. Please see http://code.google.com/p/android/issues/detail?id=35646
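For what it's worth, you can see the discrepancy numerically with any asymmetric frustum (a sketch; the doubled value only shows up on platform versions affected by the bug):

float[] m = new float[16];
Matrix.frustumM(m, 0, -1f, 2f, -1f, 1f, 1f, 10f);  // asymmetric: right != -left
// The red-book value for the first row, third column ((r+l)/(r-l)) is
// (2 + (-1)) / (2 - (-1)) = 1/3, but affected versions put 2/3 in m[8]
// (column-major, so m[8] is row 0 of column 2).
m[8] /= 2f;  // workaround until running on a fixed platform version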
((I'd prefer just to comment but I'm not allowed.))
Thank you guys for the insight. I just had to add
mMyMatrix[8] /= 2f;
after
Matrix.frustumM(mMyMatrix, ...)
To solve my aspect ratio problems :)
Yes, if you call the function with a (-ratio, ratio, -1, 1, 1, 10) parameter set, it does not cause the problem, but if you call it with right != -1 * left, it makes things different.
I found this issue when I checked the source code. Sigh.
I'm currently developing my first Android app, and my first game. I've been developing on a netbook with a CliqXT (HVGA). Things are going well; it renders perfectly on the smaller screen. I knew I'd have some issues when rendering on larger screens, but the issues I'm having are not what I was expecting, and I'm kind of stuck.
So basically the game consists of a main SurfaceView onto which I'm rendering the tiled game world. I followed this tutorial to get started, and my structure is still pretty similar, except that it calculates the boundaries based on the player location:
http://www.droidnova.com/create-a-scrollable-map-with-cells-part-i,654.html
The game also has various buildings the player can enter. Upon entering it launches another activity for that particular building. The building activities are just normal Views with Android UI stuff defined in XML (Buttons, TextViews, etc).
What I expected to happen:
So I expected the building UIs to render correctly on the larger screen. I specified all dimensions in "dp" and fonts in "sp" in hopes that they'd scale correctly. I expected the actual game tilemap to render generally correctly, but maybe be really tiny due to the higher resolution / dpi. I'm using a function very similar to the one in the tutorial linked above (calculateLoopBorders(), my version is pasted below) to calculate how many tiles to render, based on screen height and width (getHeight() and getWidth()).
What is actually happening:
The whole game is just being rendered as if it's HVGA. The tilemap, and the building UIs are just scaled down to the smaller screen size, leaving black borders around the left, right, and bottom (see images).
If anyone can point me in the right direction it'd be greatly appreciated, thanks a lot!
(Some of you may recognize this public domain DOS classic)
mCellHeight and mCellWidth are the width/height of the cells in pixels
mMapHeight and mMapWidth are the width/height of the total game world in number of tiles
public void calculateLoopBorders() {
    mWidth = getWidth();
    mHeight = getHeight();
    // Center the visible window of tiles on the player, clamped to the map edges.
    mStartRow = (int) Math.max(0, mPlayer.mRow - ((int) (mHeight / 2) / mCellHeight));
    mStartCol = (int) Math.max(0, mPlayer.mCol - ((int) (mWidth / 2) / mCellWidth));
    mMaxRow = (int) Math.min(mMapHeight, mStartRow + (mHeight / mCellHeight)) + 1;
    mMaxCol = (int) Math.min(mMapWidth, mStartCol + (mWidth / mCellWidth));
    // If the window ran past the right/bottom edge, pull the start back in.
    if (mMaxCol >= mMapWidth) {
        mStartCol = mMaxCol - (mWidth / mCellWidth);
    }
    if (mMaxRow >= mMapHeight) {
        mStartRow = mMaxRow - (mHeight / mCellHeight);
    }
    // Source rect in world pixels, destination rect covering the whole screen.
    int x1 = mStartCol * mCellWidth;
    int y1 = mStartRow * mCellHeight;
    int x2 = x1 + mWidth;
    int y2 = y1 + mHeight;
    mBgSrcRect = new Rect(x1, y1, x2, y2);
    mBgDestRect = new Rect(0, 0, mWidth, mHeight);
}
I figured it out. I was targeting 1.5 in the project, so it was assuming HVGA. Targeting 2.1 fixes the issue, and the bitmaps even seem to scale correctly using some kind of Android magic.
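Concretely, that targeting lives in the manifest rather than in code. A sketch (API level 3 corresponds to 1.5 and level 7 to 2.1; older platforms simply ignore attributes they don't recognize):

<!-- Run on 1.5+, but opt in to proper large-screen support. -->
<uses-sdk android:minSdkVersion="3" android:targetSdkVersion="7" />
<supports-screens android:largeScreens="true" android:anyDensity="true" />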
I still have a question though, when I finish this game I want it to work with 1.5+ devices. Do I need to put separate builds into the market, one for each device class? This seems like a lot of trouble for something that could be handled in a line or 2 of code in the app itself... but I've never released an app so maybe it's easily handled in the process.