I'm using GVR Android library version 1.190 and trying to play both 360- and 180-degree videos in the video360 example project.
In both cases the 2D view (MonoscopicView) starts playback fine, but the camera is never centered on the middle of the video. Instead it starts randomly off-center along the horizontal axis. The behavior is the same on multiple devices.
Does anyone know how to center the view on the video when the 2D view starts?
It turns out that the readings from Sensor.TYPE_GAME_ROTATION_VECTOR have very different values (angles) every time my activity registers a listener for it. Even a small tilt of the phone produces very different values. Different devices also respond differently, but they all have offset readings.
This led to the initial view angle being positioned (usually) 90 degrees either to the left or to the right of the video center.
Thanks to this post, I managed to calculate the initial heading offset and rotate the phone-position matrix to compensate.
Add a member variable private float initialHeading with an initial value of 0.
Then, in PhoneOrientationListener's onSensorChanged, add the following code after the Android-to-OpenGL matrix rotation:
// Capture the heading from the first reading only (0 doubles as the
// "not yet set" sentinel), normalized into [0, 2*PI).
if (initialHeading == 0) {
    initialHeading = (float) ((angles[0] + 2 * Math.PI) % (2 * Math.PI));
}
// Rotate the phone-in-world matrix around the vertical (y) axis to cancel
// the heading offset; Matrix.rotateM expects degrees, not radians.
float angle = (float) ((Math.PI - initialHeading) * 180 / Math.PI);
Matrix.rotateM(phoneInWorldSpaceMatrix, 0, angle, 0, 1, 0);
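For context, angles here is the azimuth/pitch/roll array produced by SensorManager.getOrientation earlier in onSensorChanged. A rough sketch of how it is obtained (the actual sample also remaps the coordinate system before extracting the angles, so treat this as an outline rather than a drop-in):
float[] rotationMatrix = new float[16];
float[] angles = new float[3];
SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
SensorManager.getOrientation(rotationMatrix, angles); // angles[0] = azimuth, in radians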
My app contains an object moving on a SurfaceView. I am able to move it around via the accelerometer.
Here's the movement code of the player object:
if (x + mx * speed > 0 && x + mx * speed < GameView.WIDTH) {
    x += mx * speed;
}
if (y + my * speed > 0 && y + my * speed < GameView.HEIGHT) {
    y += my * speed;
}
x and y are the player's coordinates.
mx is the value the player gets from the accelerometer: when tilting to the left, mx is -2; when tilting further, it becomes -4, -5, -6, and so on. my is the same for the y-axis.
speed is a variable I can tweak when I want faster movement.
As you can see, I tried to limit the movement so the player only moves while it is inside the view.
Now my problem: when tilting the device hard to the right, mx becomes something like 6, and speed is set to 5. So whenever the player's position + 6 * 5 would be beyond the edge of the game view, the player does not move at all, which leaves it stopped several pixels short of the right side of the view. When tilting lightly to the right, the object stops perfectly at the border.
How should I change the code so that the object stops its movement exactly at the borders of the screen?
In this picture you can see the circle not quite stopping at the bottom: there are some pixels between the circle and the bottom border. When easing back on the accelerometer, the circle aligns itself with the bottom of the screen.
So right now I can only reach the screen borders when moving slowly, that is, with a low mx or my.
In the screenshots you can see the my values: in the first picture my is about 8, and in the second about 6.
Any ideas?
Thanks in advance
Try clamping the value to the border instead, like so:
x = Math.max(Math.min(x + mx * speed, GameView.WIDTH), 0.0f);
y = Math.max(Math.min(y + my * speed, GameView.HEIGHT), 0.0f);
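Note that if x and y are the sprite's top-left corner, clamping to GameView.WIDTH and GameView.HEIGHT still lets the sprite hang off the right and bottom edges. A minimal sketch that also accounts for the sprite's size (playerWidth and playerHeight are illustrative names you would define yourself):
// Hypothetical helper: clamp a value into the range [min, max].
static float clamp(float value, float min, float max) {
    return Math.max(min, Math.min(value, max));
}

// Keep the whole sprite on screen, assuming x/y are its top-left corner
// and playerWidth/playerHeight are its dimensions in pixels.
x = clamp(x + mx * speed, 0.0f, GameView.WIDTH - playerWidth);
y = clamp(y + my * speed, 0.0f, GameView.HEIGHT - playerHeight);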
I am building an Android application similar to an x-ray scanner (Play Store link), which moves images smoothly on screen as the device is moved left, right, up, and down.
I am using the accelerometer for this, but the problem is that the image does not move smoothly.
My code is below:
int x1 = (int) sensorEvent.values[0] * (screenW / 10);
int y1 = (int) sensorEvent.values[1] * (screenH / 14);
and then in onDraw:
canvas.drawBitmap(bmp, x, y, mPaint);
This is not how you use them. You should take the current value and ADD it to the current position instead of setting the position from the value directly. The more you tilt, the bigger the values you get, and hence the faster the image will appear to move.
You can then also apply some linear interpolation to the movement so that it appears smoother.
Here is a link to learn more about lerp (linear interpolation) in code: http://en.wikipedia.org/wiki/Linear_interpolation#Programming_language_support
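As a minimal sketch of both ideas, assuming the code lives in a custom view with a per-frame update (targetX/targetY, sensitivity, and the 0.15f lerp factor are all illustrative choices, not values from the question):
float sensitivity = 5.0f;   // illustrative scaling for the tilt input
float targetX, targetY;     // position the tilt input is asking for
float smoothedX, smoothedY; // position actually drawn each frame

@Override
public void onSensorChanged(SensorEvent sensorEvent) {
    // Add the tilt to the position instead of setting it directly.
    targetX += sensorEvent.values[0] * sensitivity;
    targetY += sensorEvent.values[1] * sensitivity;
}

// Call once per frame, before drawing.
void updatePosition() {
    float t = 0.15f; // lerp factor in (0, 1]; smaller means smoother
    smoothedX += (targetX - smoothedX) * t;
    smoothedY += (targetY - smoothedY) * t;
}
Then draw with canvas.drawBitmap(bmp, smoothedX, smoothedY, mPaint).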
I want to make a 2D game with backgrounds and sprites (views) moving on the screen.
I want to make a game with scrolling ground: the user should see a horizon in the top part of the screen filling 30% of the screen, while the ground scrolls and fills the remaining 70%. For example, if I put a car on the ground, the car should be driving along a scrolling road with the sky (horizon) visible above the road, filling the top 30% of the screen.
I have been searching Google for scrolling games, but I can't find a way to achieve this kind of scrolling-ground game with a horizon.
Any ideas and approaches will be appreciated; I'm just researching how to do this.
Thanks
This kind of effect can be done in various ways; here is one very basic example.
First create a background image for your horizon - a blue sky with a sun would be good. Now create some detail images for the background, such as clouds and birds. These can move across the background image from left to right (and/or vice versa). In your rendering code you would render the "background" image first, and then the "detail" images. Make sure that your background image covers around 35% of the screen, so that when you render the 70% ground layer there is some overlap, preventing a hole where the two layers meet.
Next create a textured image for the ground. For this I would use a static image that has the correct type of texture for what you are trying to represent (such as dirt). It may also be good to add some basic detail to the top of this image (such as mountains, trees, etc).
This should be rendered after the background layer.
Once you have this layout in place, the next step would be to simulate the depth of your world. For this you would need to create objects (2D images) that would be placed in your "world". Some examples would be trees, rocks, houses, etc.
To define your world you would need to store 2 coordinates for each object - a position on the x-axis as well as a depth value on the z-axis (you could also use a y-axis component to include height, but I will omit that for this example).
You will also need to track your player's position on the same x and z axis. These values will change in realtime as the player moves into the screen - z will change based on speed, and x will change based on steering (for example).
Also define a view distance - the number of units away from the player at which objects will be visible.
Now once you have your world set up this way, the rendering is what gives the illusion of moving into the screen. First render your player object at the bottom of the ground layer. Next, for each world object, calculate its distance to the player: if that distance is within the view distance you defined, the object should be rendered; otherwise it can be ignored.
Once you find an object that should be rendered, you need to scale it based on its distance from the player. The formula for this scaling would be something like:
distance_from_player_z = object.z - player.z
scale = ( view_distance - distance_from_player_z ) / view_distance
This will result in a float value between 0.0 and 1.0, which can be used to scale your object's size: the larger the distance from the player, the smaller the object becomes. For example, with view_distance = 10, an object 7.5 units away gets scale = (10 - 7.5) / 10 = 0.25, i.e. a quarter of its full size.
Next you need to calculate the position on the x-axis and y-axis to render your object. This can be achieved with the simple 3D projection formulas:
distance_from_player_x = object.x - player.x
x_render = player.x + ( distance_from_player_x / distance_from_player_z )
y_render = ( distance_from_player_z / view_distance ) * ( height_of_background_img );
This calculates the distance of the object relative to the player on the x-axis only. It then takes this value and "projects" it, based on the distance it is away from the player on the z-axis. The result is that the farther away the object on the z-axis, the closer it is to the player on the x-axis. The y-axis part uses the distance away from the player to place the object "higher" on the background image.
So with all this information, here is a (very basic) example in code (for a single object):
// define the render size of background (resolution specific)
public final static float RENDER_SIZE_Y = 720.0f * 0.7f; // 70% of 720p
// define your view distance (in world units)
public final static float VIEW_DISTANCE = 10.0f;
// calculate the distance between the object and the player (x + z axis)
float distanceX = object.x - player.x;
float distanceZ = object.z - player.z;
// check if object is visible - i.e. within view distance and in front of player
if ( distanceZ > 0 && distanceZ <= VIEW_DISTANCE ) {
    // object is in view, render it
    float scale = ( VIEW_DISTANCE - distanceZ ) / VIEW_DISTANCE;
    float renderSize = ( object.size * scale );
    // calculate the projected x,y values to render at
    float renderX = player.x + ( distanceX / distanceZ );
    float renderY = ( distanceZ / VIEW_DISTANCE ) * RENDER_SIZE_Y;
    // now render the object scaled to "renderSize" at (renderX, renderY)
}
Note that if distanceZ is smaller than or equal to zero, the object is behind the player and also not visible. This check is important, as distanceZ == 0 would cause a division by zero in the projection, so be sure to exclude it. You may also need to tweak the renderX value depending on resolution, but I will leave that up to you.
While this is not at all a complete implementation, it should get you going in the right direction.
I hope this makes sense to you, and if not, feel free to ask :)
Well, you can use libgdx (http://libgdx.badlogicgames.com/).
The superjumper example will point you in the right direction :) (https://github.com/libgdx/libgdx/tree/master/demos/superjumper)
I'd like to project images on a wall using the camera. Essentially, the images must scale according to the distance between the camera and the wall.
First, I calculated the distance using right-triangle trigonometry (visionHeight * Math.tan(a)). It's not 100% exact, but it is close to the real values.
Second, knowing the distance, we can try to figure out the full panorama height using the isosceles-triangle trigonometry formula c = a * tan(A), where:
A = mCamera.getParameters().getVerticalViewAngle();
The results are about 30% greater than the actual object height, which is kind of weird.
double panoramaHeight = 2 * distance * Math.tan(Math.toRadians(mCamera.getParameters().getVerticalViewAngle() / 2));
I've also tried figuring out the angles using the same isosceles-triangle formula, but this time knowing the distance and the height. I got angles of 28 and 48 degrees.
Does this mean the Android camera doesn't render everything it captures? And what other solutions can you suggest?
A web search shows that the values returned by getVerticalViewAngle() cannot be blindly trusted on all devices. Also note that you should take the zoom level and aspect ratio into account; see Determine angle of view of smartphone camera.
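As a minimal sketch of the geometry under discussion (assuming the reported field of view is accurate for the current zoom and aspect ratio), the visible height at a given distance would be:
// Height of the wall area visible at a given distance, derived from the
// vertical field of view (an isosceles triangle split into two right triangles).
double visibleHeight(double distance, float verticalFovDegrees) {
    double halfAngle = Math.toRadians(verticalFovDegrees) / 2.0;
    return 2.0 * distance * Math.tan(halfAngle);
}
Usage, with the caveat above about trusting the reported angle: double h = visibleHeight(distance, mCamera.getParameters().getVerticalViewAngle());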
If you have a "ball" inside a 2D polygon, made up of say, 4 line segments that act as bounding walls, how do you calculate the angle of the ball after the collision with the irregularly sloped wall?
I know how to make the ball bounce if the wall is horizontal, vertical, or at a 45 degree angle. I also have my code setup to detect a collision with the wall.
I've read about dot products and normals, but I cannot figure out how to implement these in Java / Android. I'm completely stumped and feel like I've looked up everything 10 pages deep in Google 10 times now. I'm burned out trying to figure this out, I hope someone can help.
Apologies in advance: I don't know the correct Android types. I'm assuming you have a vector type with properties 'x' and 'y'.
If the wall were horizontal and the current velocity were 'vector' then it'd be as easy as:
vector.y = -vector.y;
And you'd leave the x component alone. So you need to do something analogous, but more general.
You do that by substituting the idea of the line normal (a vector perpendicular to the line) for hard coding for the y axis (which is perpendicular to the horizontal).
Since the normal is orthogonal to the line, it can be found by rotating the line by 90 degrees. In 2d, the vector (a, b) can be rotated by 90 degrees by converting it to (-b, a). Hence if you have a line from (x1, y1) to (x2, y2) then you can get the normal with:
vectorAlongLine.x = x2 - x1;
vectorAlongLine.y = y2 - y1;
normal.x = -vectorAlongLine.y;
normal.y = vectorAlongLine.x;
You don't actually care how long the original line was (and it'll affect computations later when you don't want it to), so you want to make the normal be of length 1 irrespective of its current length. You can do that by dividing it by its current length. So, e.g.
lengthOfNormal = Math.sqrt(normal.x*normal.x + normal.y*normal.y);
normal.x /= lengthOfNormal;
normal.y /= lengthOfNormal;
Using the Pythagorean theorem there to get the length.
With the horizontal line, flipping on the y axis was the same as (i) working out how far the vector extends along the y axis, and (ii) subtracting that amount twice: once to bring the velocity to 0 in that direction, and again to make it the negative of the original. That is, it's the same as:
distanceAlongNormal = vector.y;
vector.y -= 2.0 * distanceAlongNormal;
In the general case, the dot product is used to work out how far the vector extends along the normal. So it does the same job that taking vector.y did for the horizontal line. This is where you possibly have to take a bit of a leap of faith. It's a property of the dot product, and you can persuade yourself of it by inspecting a right-angled triangle. But for now: if you had a horizontal line, you'd have ended up with the normal (0, 1). Since the dot product would be:
vector.x * normal.x + vector.y * normal.y
You'd compute:
distanceAlongNormal = vector.x * 0.0 + vector.y * 1.0;
Which is obviously the same thing as just taking the y component.
Having worked out the distance along the normal, you then want to subtract that amount times the normal, twice. The only additional step here is multiplying by the normal to get a 2d quantity to subtract. That's because you want to subtract in the direction of the normal. So the complete code, based on the normal computed earlier, is:
distanceAlongNormal = vector.x * normal.x + vector.y * normal.y;
vector.x -= 2.0 * distanceAlongNormal * normal.x;
vector.y -= 2.0 * distanceAlongNormal * normal.y;
If you hadn't made normal of length 1, then you'd need to divide by the length here, since the dot product would scale the distanceAlongNormal value by that amount.
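Putting the steps together, here is a minimal sketch of the whole reflection (the Vector2 class with public x/y fields is an assumed stand-in for whatever vector type you actually use):
class Vector2 {
    float x, y;
}

// Reflect 'velocity' off the wall segment from (x1, y1) to (x2, y2).
static void reflect(Vector2 velocity, float x1, float y1, float x2, float y2) {
    // Normal of the wall: the segment's direction rotated by 90 degrees...
    float nx = -(y2 - y1);
    float ny = x2 - x1;
    // ...normalized to length 1 (assumes the segment has nonzero length).
    float len = (float) Math.sqrt(nx * nx + ny * ny);
    nx /= len;
    ny /= len;
    // Distance the velocity extends along the normal (the dot product),
    // subtracted twice to flip that component.
    float d = velocity.x * nx + velocity.y * ny;
    velocity.x -= 2.0f * d * nx;
    velocity.y -= 2.0f * d * ny;
}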
This might come in handy for you
http://www.tonypa.pri.ee/vectors/tut07.html