Android: Achieving smooth roll of ball using accelerometer

I have developed a maze game for Android where you control the ball by tilting the phone.
So I read the accelerometer, integrate the x and y values, and then move the ball a step in that direction.
The problem is that I cannot achieve a very smooth roll. When the ball picks up speed it is too obvious that it jumps in big discrete steps, whereas I have seen other apps like this where the ball rolls fast but smoothly.
So I might have to change my strategy and use some sort of time-based solution instead. Right now, the faster the speed, the bigger the step I move. Maybe I should instead have a timer that moves the ball 1 pixel every ms when the speed is high, or only every 10th ms when the speed is low, or something along those lines.
Or how do people achieve a smoother roll?
Also: Would you use OpenGL for this?

What you're really doing here is integrating coupled differential equations. Don't worry if you haven't taken enough calculus or physics to know what that means.
People who integrate coupled differential equations for a living have evolved many algorithms to do so efficiently.
You really have four equations here: two relating acceleration to velocity and two relating velocity to position, in the x and y directions:
dvx/dt = ax
dvy/dt = ay
dsx/dt = vx
dsy/dt = vy
(sx, sy) give the position of the ball at a given time. You'll need initial conditions for (sx, sy) and (vx, vy).
It sounds like you've chosen the simplest way to integrate ODEs: explicit Euler integration. You calculate the values at the end of a step from the values at the beginning plus the rate of change times the time step:
(vx, vy)_1 = (vx, vy)_0 + (ax, ay)_0 * dt
(sx, sy)_1 = (sx, sy)_0 + (vx, vy)_0 * dt
It's easy to program, but it tends to suffer from stability problems under certain conditions if your time step is too large.
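As a rough illustration, here is a minimal Java sketch of that explicit Euler step, assuming ax and ay come straight from your accelerometer listener and dt is the frame time in seconds (all names are mine, not from your code):

// State of the ball: sx, sy in pixels, vx, vy in pixels per second.
float sx, sy, vx, vy;

// Called once per frame with the latest accelerometer readings and
// the elapsed time since the previous frame, in seconds.
void step(float ax, float ay, float dt) {
    // Explicit Euler: advance position with the old velocity first,
    // then advance the velocity with the current acceleration.
    sx += vx * dt;
    sy += vy * dt;
    vx += ax * dt;
    vy += ay * dt;
}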
You can shrink your time step, which will force you to perform the calculations many more times, or switch to another integration scheme. Search for implicit integration, Runge-Kutta, etc.
Integration and rendering are separate problems.

Related

How to use the numbers from Game Rotation Vector in Android?

I am working on an AR app that needs to move an image depending on the device's position and orientation.
It seems that Game Rotation Vector should provide the necessary data to achieve this.
However, I can't seem to understand what the values I get from the GRV sensor represent. For instance, in order to reach the same value on the Z axis I have to rotate the device 720 degrees. This seems odd.
If I could somehow convert these numbers to angles from the reference frame of the device towards the x,y,z coordinates my problem would be solved.
I have googled this issue for days and didn't find any sensible information on the meaning of GRV coordinates, and how to use them.
TL;DR: What do the numbers from the GRV sensor represent, and how do I convert them to angles?
As the docs state, the GRV sensor gives back a 3D rotation vector, represented by three components:
x axis (x * sin(θ/2))
y axis (y * sin(θ/2))
z axis (z * sin(θ/2))
This is confusing at first. θ (pronounced theta) is the rotation angle, and it is the same angle in all three components; what differs per component is (x, y, z), which is the unit vector of the rotation axis. So each number mixes one axis component with the rotation angle, which isn't clear from the docs at all.
Note also that when working with angles, especially in 3D, we generally use radians, not degrees, so theta is in radians. This looks like a good introductory explanation.
But the reason it's given to us in this format is that it can be used directly in matrix rotations, especially as a quaternion. In fact, these are the three vector components of a quaternion, the components which specify the rotation axis scaled by sin(θ/2). The 4th component is the scalar part, cos(θ/2), which completes the rotation: together the four numbers describe the whole axis-angle rotation in one compact form.
These are directly usable in OpenGL, which is the Android (and the rest of the world's) 3D library of choice. Check this tutorial out for some OpenGL rotation info, this one for quaternion theory as applied to 3D programming, and this example by Google for Android, which shows exactly how to use this information directly.
If you read the articles, you can see why you get it in this form and why it's called Game Rotation Vector - it's what's been used by 3D programmers for games for decades at this point.
TL;DR: This example is excellent.
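If what you're ultimately after are plain angles, you usually don't need to unpack the quaternion yourself. A minimal sketch of the common route via SensorManager, assumed to run inside onSensorChanged for a TYPE_GAME_ROTATION_VECTOR event (variable names are mine):

float[] rotationMatrix = new float[9];
float[] orientation = new float[3];
// event.values holds the rotation vector components described above.
SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
SensorManager.getOrientation(rotationMatrix, orientation);
// orientation[0], [1], [2] are azimuth, pitch and roll in radians;
// use Math.toDegrees() if you want degrees.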
Edit - How to use this to show a 2D image which is rotated by this vector in 3D space.
In the example above, SensorManager.getRotationMatrixFromVector converts the Game Rotation Vector into a rotation matrix which can be applied to rotate anything in 3D. To apply this rotation to a 2D image, you have to think of the image in 3D: it's actually a segment of a plane, like a sheet of paper. So you'd map your image, which in the jargon is called a texture, onto this plane segment.
Here is a tutorial on texturing cubes in OpenGL for Android with example code and an in depth discussion. From cubes it's a short step to a plane segment - it's just one face of a cube! In fact that's a good resource for getting to grips with OpenGL on Android, I'd recommend reading the previous and subsequent tutorial steps too.
You mentioned translation as well. Look at the onDrawFrame method in the Google code example: there is a translation using gl.glTranslatef and then a rotation using gl.glMultMatrixf. This is how you translate and rotate.
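Stripped down to the two relevant calls, the pattern in that onDrawFrame looks roughly like this (the -3f offset is just an illustrative value; rotationMatrix is the 4x4 matrix produced by SensorManager.getRotationMatrixFromVector):

gl.glLoadIdentity();
// Translation is specified first and the rotation second: with fixed-function
// GL the object is rotated about its own origin and then pushed back -3 on z.
gl.glTranslatef(0f, 0f, -3f);
gl.glMultMatrixf(rotationMatrix, 0);
// ... draw the textured plane segment here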
The order in which these operations are applied matters. Here's a fun way to experiment with that: check out Livecodelab, a live 3D sketch-coding environment which runs inside your browser. In particular, this tutorial encourages reflection on the ordering of operations. (The command move there is obviously a translation.)

Is this Fourier Analysis of Luminance Signals Correct? (Android)

I'm writing an Android app that measures the luminance of camera frames over a period of time and calculates a heart beat using Fourier Analysis to find the wave's frequency. The problem is that my spectral analysis looks like this:
which is pretty much the inverse of what a spectral analysis should look like (like a normal distribution). Can I accurately assess this to find the index of the maximum magnitude, or does this spectrum reveal that my data is too noisy?
EDIT:
Here's what my camera data looks like (I'm performing FFT on this):
It looks like you have two problems going on here:
1) The FFT output usually places the values for the negative frequencies to the right of the positive frequencies, which seems to be the case here. Therefore, you need to move the right half of the FFT output to the left and put freq = 0 in the middle.
2) In the comments you say that you're plotting the magnitude, but that's clearly not the case (the magnitude would be non-negative and symmetric). You're probably just plotting the real part. Instead, take the magnitude, sqrt(Re*Re + Im*Im), where Re and Im are the real and imaginary parts respectively. (Depending on the form of your numbers, something like Math.sqrt(Math.pow(a.re, 2) + Math.pow(a.im, 2)).)
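A rough sketch of both fixes, assuming the FFT output is available as arrays re[i], im[i] of length N (names are hypothetical):

// 1) Magnitude of each bin.
double[] mag = new double[N];
for (int i = 0; i < N; i++) {
    mag[i] = Math.sqrt(re[i] * re[i] + im[i] * im[i]);
}

// 2) Reorder so that frequency 0 sits in the middle
//    (move the second half of the array in front of the first half).
double[] shifted = new double[N];
for (int i = 0; i < N; i++) {
    shifted[i] = mag[(i + N / 2) % N];
}

For the heart rate itself you can skip the shift entirely and just take the bin with the largest magnitude among indices 1 to N/2 - 1; its frequency is index * samplingRate / N.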

How to implement movement speed of object correctly?

So I am coding an Android game and have managed to make a ball roll over the screen in the direction you tilt your phone. However, I would like to make the ball roll faster the more you tilt the phone.
But what is the best way to implement this? Taking bigger steps is obviously not good; it makes collisions hard to calculate. I want to move more steps per second instead.
So let's say you have a tiled board and you implement speed as tiles/millisecond. But that is problematic too: the speed will not be continuous. You'd perhaps move 1 step every 10th iteration of the loop instead of every iteration, so you would move, then stand still, then move, etc., instead of moving continuously. But maybe that is as good as it gets?
I guess this problem applies generally to any kind of computer graphics. How do you implement this the best way? I'm specifically interested in what applies to Android.
The natural way of implementing speed and position is to advance the position by the speed on every step:
position += speed * dt
with dt constant, adapted for your implementation.
So basically the natural way is to increase the step. You say it's obviously bad for collision detection but with a limited speed and a small dt I don't really see why.
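The answer above assumes a constant dt; a common variant measures the real frame time so the motion stays continuous even if the frame rate varies. A minimal sketch of that (all names are illustrative):

float ballX, ballY;          // position in pixels
float speedFactor = 300f;    // pixels per second at full tilt (illustrative)
long lastTime = System.nanoTime();

void updateBall(float tiltX, float tiltY) {
    long now = System.nanoTime();
    float dt = (now - lastTime) / 1_000_000_000f; // seconds since last frame
    lastTime = now;

    // Speed scales continuously with tilt; no discrete "every 10th tick" steps.
    ballX += tiltX * speedFactor * dt;
    ballY += tiltY * speedFactor * dt;
}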

Android collision detection accuracy

I have one Sprite and multiple bitmaps that work as bullets. The problem is that I want the bullet to go really fast, but that causes a problem for my collision detection: every frame I create a Rect at both the enemy and the bullet and check for overlaps.
Now if a bullet goes really fast it "jumps" from point to point, which means that if an enemy is small and the bullet "jumped" over it, the hit wouldn't get noticed.
What I want to know is whether there is a way to detect that a collision is going to happen between two moving objects, or to check whether an enemy is in the trajectory of the bullet.
I made the collision detection by looping through an array of enemies, and inside that loop I loop over all bullets, create a Rect for both, and check for collisions with Rect.intersects for each enemy and bullet.
The bullet's heading is determined by one fixed point and the touch input, and then calculated by this function:
public void calcPoint(float x, float y) {
    double alfa = Math.atan(x / y); // x and y are the touch inputs
    bulletPointX = (float) Math.sin(alfa) * speed;
    bulletPointY = (float) Math.cos(alfa) * speed;
}
I have no idea how to fix it, and I'd like some advice on how to do it and maybe an example...
There are several ways to do this, but the easiest is probably predictive collision detection. Basically, you split the movement into time segments and check for each segment. It may not be very efficient if you have a few thousand objects moving around, but for most cases it'll work just fine.
It's known by a few other names as well. The link I gave can help with the basics, but to learn more, do a search for:
Frame Independent Collision Detection
Continuous Collision Detection
Sweep Testing
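The simplest form of the idea is sub-stepping: instead of moving the bullet its full distance in one go, split the move into slices small enough that nothing can pass through an enemy, and run your existing Rect.intersects check after each slice. A rough sketch, assuming fields like speed, minEnemySize, bulletX/bulletY and enemyRect that would come from your own classes:

// Split the bullet's movement this frame into steps no larger than the
// smallest enemy dimension, so the bullet cannot jump over an enemy.
int steps = Math.max(1, (int) Math.ceil(speed / minEnemySize));
float stepX = bulletPointX / steps;
float stepY = bulletPointY / steps;

for (int i = 0; i < steps; i++) {
    bulletX += stepX;
    bulletY += stepY;
    Rect bulletRect = new Rect((int) bulletX, (int) bulletY,
            (int) bulletX + bulletWidth, (int) bulletY + bulletHeight);
    // Check against each enemy here (only one enemyRect shown).
    if (Rect.intersects(bulletRect, enemyRect)) {
        // handle the hit, remove the bullet, etc.
        break;
    }
}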

Implementing Anti-Aliasing (FSAA?) on Android, OpenGL

I'm currently using OpenGL on Android to draw lines of a set width, which works great except for the fact that OpenGL on Android does not natively support anti-aliasing of such lines. I have done some research, but I'm stuck on how to implement my own AA.
FSAA
The first possible solution I have found is Full Screen Anti-Aliasing. I have read this page on the subject but I'm struggling to understand how I could implement it.
First of all, I'm unsure on the entire concept of implementing FSAA here. The article states "One straightforward jittering method is to modify the projection matrix, adding small translations in x and y". Does this mean I need to be constantly moving the same line extremely quickly, or drawing the same line multiple times?
Secondly, the article says "To compute a jitter offset in terms of pixels, divide the jitter amount by the dimension of the object coordinate scene, then multiply by the appropriate viewport dimension". What's the difference between the dimension of the object coordinate scene and the viewport dimension? (I'm using a 800 x 480 resolution)
Now, based on the information given in that article the 'jitter' coordinates should be relatively easy to compute. Based on my assumptions so far, here is what I have come up with (Java)...
float currentX = 50;
float currentY = 75;
// I'm assuming the "jitter" amount is essentially
// the amount of anti-aliasing (e.g 2x, 4x and so on)
int jitterAmount = 2;
// don't know what these two are
int coordSceneDimensionX;
int coordSceneDimensionY;
// I assume screen size
int viewportX = 800;
int viewportY = 480;
float newX = (jitterAmount/coordSceneDimensionX)/viewportX;
float newY = (jitterAmount/coordSceneDimensionY)/viewportY;
// and then I don't know what to do with these new coordinates
That's as far as I've got with FSAA.
Anti-Aliasing with textures
In the same document I was referencing for FSAA, there is also a page that briefly discusses implementing anti-aliasing with the use of textures. However, I don't know what the best way to implement AA this way would be, or whether it would be more efficient than FSAA.
Hopefully someone out there knows a lot more about Anti-Aliasing than I do and can help me achieve this. Much appreciated!
The method presented in the article predates the time when GPUs were capable of performing antialiasing themselves. This jittered rendering into an accumulation buffer is not really state of the art for realtime graphics (it is a widely implemented form of antialiasing for offline rendering, though).
What you do these days is requesting an antialiased framebuffer. That's it. The keyword here is multisampling. See this SO answer:
How do you activate multisampling in OpenGL ES on the iPhone? – although written for iOS, doing it on Android follows a similar path. AFAIK, on Android this extension is used instead: http://www.khronos.org/registry/gles/extensions/ANGLE/ANGLE_framebuffer_multisample.txt
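In practice this means choosing an EGL config that has sample buffers when the GLSurfaceView is set up. A minimal sketch (error handling omitted; the 4 is just a typical sample count, and this must run before setRenderer is called):

glSurfaceView.setEGLConfigChooser(new GLSurfaceView.EGLConfigChooser() {
    @Override
    public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
        int[] attribs = {
                EGL10.EGL_RED_SIZE, 8,
                EGL10.EGL_GREEN_SIZE, 8,
                EGL10.EGL_BLUE_SIZE, 8,
                EGL10.EGL_DEPTH_SIZE, 16,
                EGL10.EGL_SAMPLE_BUFFERS, 1,   // ask for a multisampled surface
                EGL10.EGL_SAMPLES, 4,          // 4x MSAA
                EGL10.EGL_NONE
        };
        EGLConfig[] configs = new EGLConfig[1];
        int[] numConfigs = new int[1];
        egl.eglChooseConfig(display, attribs, configs, 1, numConfigs);
        // Returning null makes GLSurfaceView fail; a real chooser should
        // fall back to a non-multisampled config if none was found.
        return numConfigs[0] > 0 ? configs[0] : null;
    }
});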
First of all, the article you refer to uses the accumulation buffer, which I really doubt exists in OpenGL ES, but I might be wrong here. If the accumulation buffer really is supported in ES, then you at least have to request it explicitly when creating the GL context (however that is done on Android).
Note that this technique is extremely inefficient and also outdated, since nowadays GPUs usually support some kind of multisample antialiasing (MSAA). You should research whether your system/GPU/driver supports multisampling. This may require you to request a multisampled framebuffer during context creation or something similar.
Now back to the article. The basic idea of this article is not to move the line quickly, but to render the line (or actually the whole scene) multiple times at very slightly different (at sub-pixel accuracy) locations (in image space) and average these multiple renderings to get the final image, every frame.
So you have a set of sample positions (in [0,1]), which are actually sub-pixel positions. This means that for a sample position of (0.25, 0.75) you move the whole scene a quarter of a pixel in the x direction and three quarters of a pixel in the y direction (in screen space, of course) when rendering. When you have done this for each sample, you average all these renderings together to get the final antialiased image.
The dimension of the object coordinate scene is basically the dimension of the screen (actually the near plane of the viewing volume) in object space, or more practically, the values you passed into glOrtho or glFrustum (or a similar function, but with gluPerspective it is not that obvious). For modifying the projection matrix to realize this jittering, you can use the functions presented in the article.
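To make that concrete: if your glOrtho call covers, say, 0..800 in x and 0..480 in y, then one pixel is (right - left) / viewportWidth wide in object space, and a sub-pixel jitter of (jx, jy) pixels becomes an offset of that size on the ortho bounds. A rough sketch for an ES 1.x ortho projection (left, right, bottom, top, near, far, viewportWidth, viewportHeight, jx and jy are all your own values):

// One pixel expressed in object-space units.
float pixelW = (right - left) / viewportWidth;
float pixelH = (top - bottom) / viewportHeight;
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
// Shifting the ortho window by a fraction of a pixel shifts the whole
// rendered image by the same fraction in the opposite direction.
gl.glOrthof(left - jx * pixelW, right - jx * pixelW,
            bottom - jy * pixelH, top - jy * pixelH,
            near, far);
gl.glMatrixMode(GL10.GL_MODELVIEW);
// ...then render the whole scene with this projection and repeat per sample.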
The jitter amount is not the antialiasing factor, but the sub-pixel sample location. The antialiasing factor in this context is the number of samples, and therefore the number of jittered renderings you perform. And your code won't work if, as I assume, you only jitter the line end points: you have to draw the whole scene multiple times using the jittered projection, not just this single line (it may work with a simple black background and appropriate blending, though).
You might also be able to achieve this without an accum buffer using blending (with glBlendFunc(GL_CONSTANT_COLOR, GL_ONE) and glBlendColor(1.0f/n, 1.0f/n, 1.0f/n, 1.0f/n), with n being the antialiasing factor/sample count). But keep in mind to render the whole scene like this and not just this single line.
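Sketched in ES 2.0 terms (GL_CONSTANT_COLOR blending is not available in core ES 1.x), and assuming samples is your array of sub-pixel offsets and drawScene(...) / jitteredProjection(...) are your own helpers, the blending-based accumulation would look roughly like this:

int n = samples.length;                         // number of jittered passes
GLES20.glClearColor(0f, 0f, 0f, 1f);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glEnable(GLES20.GL_BLEND);
// Each pass adds 1/n of its colour to what is already in the framebuffer.
GLES20.glBlendColor(1f / n, 1f / n, 1f / n, 1f / n);
GLES20.glBlendFunc(GLES20.GL_CONSTANT_COLOR, GLES20.GL_ONE);
for (int i = 0; i < n; i++) {
    GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT); // keep colour, reset depth
    drawScene(jitteredProjection(samples[i]));  // hypothetical helpers
}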
But as said, this technique is completely outdated and you should rather look for a way to enable MSAA on your ES platform.
