How to implement panning in an OpenGL ES program on Android

I have a program in OpenGL ES in which a cube is rotated along the X, Y, and Z axes in response to user touch.
Now I want to implement a panning feature as well (when the user touches the cube with at least two fingers, it will be panned along the axis)...
Can anyone suggest where I should start? I have searched a lot on Google and am still unable to find a satisfactory example.
My code is similar to the code in the API Demos application installed by default on the emulator, in which a cube is shown.

Panning can refer to a number of different things in a 3D context, but in general you're just going to be moving the camera around. This is just a matter of how you set up your modelview matrix; I recommend using some sort of 'lookat' function to calculate this, and simply change your eye coordinate.
For the most natural panning, modify the eye coordinate by multiplying your screen-space displacement vector (with w=0) by your existing modelview matrix, and adding the resulting vector to your eye coordinate.
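A minimal sketch of that idea, assuming the modelview matrix is built with android.opengl.Matrix.setLookAtM and that the eye and look-at coordinates are kept in plain fields (dx, dy, eyeX, centerX, etc. are placeholder names):

// Transform the screen-space displacement into world space (w = 0 so the
// matrix's translation part is ignored), then shift eye and look-at point.
float[] displacement = { dx, dy, 0f, 0f };
float[] worldDelta = new float[4];
Matrix.multiplyMV(worldDelta, 0, modelViewMatrix, 0, displacement, 0);

eyeX += worldDelta[0];  eyeY += worldDelta[1];  eyeZ += worldDelta[2];
centerX += worldDelta[0];  centerY += worldDelta[1];  centerZ += worldDelta[2];

Matrix.setLookAtM(modelViewMatrix, 0,
        eyeX, eyeY, eyeZ,          // camera position
        centerX, centerY, centerZ, // look-at point
        0f, 1f, 0f);               // up vector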

Related

How to detect clicks (touch) on specific parts of 3d model?

I need to load a 3D model into my app (it's not a game, not that it makes any difference) and detect when the user touches specific parts of this model, in order to take different actions.
How can I do this? Is it possible?
I'm not familiar with Rajawali, but GitHub describes it as an OpenGL ES framework. As you described it in the comment above, you'll need to consider two basic user actions, and one action I'll add as helpful:
Swipe across the screen in some direction: change in X, change in Y.
Touch at some (x,y) point on screen with the car in some orientation.
(Optional) Zoom in/out to make it easier for a user to select small features such as side mirrors.
Depending on what OpenGL ES details Rajawali exposes, you'll need to do one or both of the following:
Learn about the four matrices that determine how a 3D scene is rendered on a 2D screen.
Find the Rajawali functions with names such as "lookAt" or "setViewpoint," and learn how to pass screen gesture info to these functions.
You can read about the four OpenGL matrices at length elsewhere. Even if Rajawali simplifies the coding, you should learn a little about those matrices. Although your first inclination may be to change the "model" matrix that affects the object's position and orientation, it's more likely that you'll be manipulating the "view" matrix, which determines the point and direction in space from which the user sees the car. That is, the car will actually remain centered at (0,0,0), and the user's swipes, touches, and pinches will change the viewpoint.
Constraining movements so that the vehicle is always centered is nice both because your code can be a little simpler, and also because the user can't "lose" the car by sliding the viewpoint too far to one side.
The simplest change of viewpoint is a zoom, which in most implementations means simply changing the Z translation of the viewpoint matrix. Rajawali may make this simpler by providing zoomIn() and zoomOut() functions. Otherwise you'll need to do the following (a short sketch follows the steps):
In the callback or "event handler" provided by Rajawali/Android for a pinch, get the pinch-in or pinch-out value.
Call the Rajawali zoomIn() or zoomOut() function, if it exists. You will likely need to scale the value so that the amount of pinch matches expectations for zooming in and out of a car model.
Alternately, set the Z translation component of the view matrix.
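Here is a hedged sketch of the pinch handling with a plain Android ScaleGestureDetector; cameraZ is a hypothetical field the renderer would read when it rebuilds the view matrix (e.g. with Matrix.setLookAtM):

import android.view.ScaleGestureDetector;

class PinchZoomListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
    float cameraZ = 6f;   // current eye distance from the model (placeholder)

    @Override
    public boolean onScale(ScaleGestureDetector detector) {
        // getScaleFactor() > 1 means the fingers moved apart (zoom in).
        cameraZ /= detector.getScaleFactor();
        // Clamp so the user can't zoom through the model or lose it entirely.
        cameraZ = Math.max(2f, Math.min(20f, cameraZ));
        return true;
    }
}

// In the custom GLSurfaceView: create the detector once and forward touch events to it.
// scaleDetector = new ScaleGestureDetector(context, new PinchZoomListener());
// @Override public boolean onTouchEvent(MotionEvent e) { return scaleDetector.onTouchEvent(e); }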
Converting an (x,y) 2D screen touch point to a ray cast into 3D space can be tricky if Rajawali doesn't provide an appropriate function, called something like "screenToWorld", that accepts a point in 2D screen space and returns a 3D point or 3D ray in world space. Spend time googling "ray casting" for Rajawali. Here's a brief overview of what the code will need to do:
Convert a 2D touch point into a 3D ray pointed into the screen.
Check for the intersection(s) of the 3D ray and various subobjects.
(Optional) Change the color or otherwise highlight the selected object.
OpenGL does not provide a ray casting function, and I don't recommend implementing it on your own unless you have no choice. Various frameworks that wrap around or supplement OpenGL may provide it. OpenGL coders will fault me for this description, but from memory here's how to convert a 2D touch point into a 3D ray pointing into the screen (a rough sketch follows the steps):
Get the (x,y) 2D screen touch point from a "touch" or "click" callback or event handler in Rajawali or Android.
Convert the 2D touch point to a 3D point. If I remember, this means setting Z to some value such as -1, 0, or 1. This is the base point of the ray.
Define a second 3D point with a different Z value. This is a far point of the ray.
Use the screen, projection, and view matrices to transform the 3D points into "world" space.
Given the 3D world coordinates for your base point and far point, use ray-object intersection to determine what object is intersected.
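If you have access to the modelview and projection matrices, those steps can be approximated with android.opengl.GLU.gluUnProject. The following is only a sketch; touchX/touchY, the matrix arrays, and viewport = {0, 0, width, height} are assumed to be available as fields:

float winX = touchX;
float winY = viewport[3] - touchY;   // flip Y: screen Y grows downward, GL Y grows upward

float[] near = new float[4];
float[] far  = new float[4];
GLU.gluUnProject(winX, winY, 0f, modelViewMatrix, 0, projectionMatrix, 0,
        viewport, 0, near, 0);       // winZ = 0 -> point on the near plane (ray base)
GLU.gluUnProject(winX, winY, 1f, modelViewMatrix, 0, projectionMatrix, 0,
        viewport, 0, far, 0);        // winZ = 1 -> point on the far plane (ray far point)

for (int i = 0; i < 3; i++) {        // divide by w to get usable 3D coordinates
    near[i] /= near[3];
    far[i]  /= far[3];
}
// Then test the ray near -> far against each subobject's bounding volume.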
Again, Rajawali may provide some function that determines which object(s) are intersected by the ray. If multiple objects are returned, then pick the closest object. Since your vehicle is already subdivided into multiple subobjects this shouldn't be too hard. Implementing pinch-to-zoom can make it easier for a user to select a small object.
Swiping is analogous to a mouse move in OpenGL, and many starter projects for OpenGL describe how to convert a mouse move into a rotation. Assuming for the moment that the model rotates only about the vertical axis from the ground through the roof, you simply need to change left/right swipes into positive/negative rotations about what in OpenGL is typically the Y axis (a sketch follows the steps below).
From Android/Rajawali, implement the "swipe" callback or event handler. This is analogous to a "mouseMove" function.
Translate the left/right swipe into a negative/positive value.
Call the rotateAboutY() function, if available, OR apply a rotation to the viewpoint matrix (which I won't describe here).
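A hedged sketch of that handling, in the style of the stock GLSurfaceView touch-rotate samples; it assumes a GLSurfaceView subclass whose Renderer applies a hypothetical mAngleY field each frame:

private static final float TOUCH_SCALE_FACTOR = 180.0f / 320f;  // degrees per pixel (tune to taste)
private float previousX;

@Override
public boolean onTouchEvent(MotionEvent e) {
    if (e.getAction() == MotionEvent.ACTION_MOVE) {
        float dx = e.getX() - previousX;
        mAngleY += dx * TOUCH_SCALE_FACTOR;   // left swipe -> negative rotation, right -> positive
        requestRender();                      // assumes RENDERMODE_WHEN_DIRTY on the GLSurfaceView
    }
    previousX = e.getX();
    return true;
}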
Given all that, I would suggest the following approach:
See if Rajawali provides convenience functions to convert screen coordinates to a world ray, to convert a screen swipe to a rotation, and to test a ray intersection with a series of objects.
Even if Rajawali provides these functions, read a little bit about the low-level OpenGL ES underneath, and the four matrices: screen, perspective, viewpoint, and model.
If Rajawali doesn't provide the convenience functions, look for a framework that does OR see if some other library that works with Rajawali can provide these convenience functions.
If you can't change frameworks or find a framework that hides the messy details, plan to spend a week or more studying OpenGL closely. You probably don't need to know about shaders, textures, etc., but you will need to understand the OpenGL 3D space, the four matrices, and so on.

How to use the numbers from Game Rotation Vector in Android?

I am working on an AR app that needs to move an image depending on device's position and orientation.
It seems that Game Rotation Vector should provide the necessary data to achieve this.
However, I can't seem to understand what the values that I get from the GRV sensor represent. For instance, in order to reach the same value on the Z axis I have to rotate the device 720 degrees, which seems odd.
If I could somehow convert these numbers to angles from the reference frame of the device towards the x,y,z coordinates my problem would be solved.
I have googled this issue for days and didn't find any sensible information on the meaning of GRV coordinates, and how to use them.
TL;DR: What do the numbers from the GRV sensor represent, and how do I convert them to angles?
As the docs state, the GRV sensor gives back a 3D rotation vector. This is represented as three component numbers, given by:
x axis (x * sin(θ/2))
y axis (y * sin(θ/2))
z axis (z * sin(θ/2))
This can be confusing at first. θ (pronounced theta) is a single angle: the total rotation about the unit axis (x, y, z), so each component is that axis coordinate scaled by sin(θ/2). This also explains the 720-degree observation in the question: sin(θ/2) has a period of 720 degrees, so a component only returns to the same value after two full turns.
Note also that when working with angles, especially in 3D, we generally use radians, not degrees, so theta is in radians. This looks like a good introductory explanation.
But the reason it's given to us in this format is that it can easily be used in matrix rotations, especially as a quaternion. In fact, these are the vector components of a quaternion, the components which encode the rotation axis; the fourth component is the scalar part, cos(θ/2). Together the four components describe the rotation completely.
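If you want the full quaternion rather than working it out by hand, SensorManager has a helper for exactly this; a minimal sketch, where event is the SensorEvent delivered by the GRV sensor:

float[] quaternion = new float[4];
SensorManager.getQuaternionFromVector(quaternion, event.values);
// quaternion[0]    = cos(θ/2)          (scalar part)
// quaternion[1..3] = axis * sin(θ/2)   (the three GRV components)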
These are directly usable in OpenGL which is the Android (and the rest of the world's) 3D library of choice. Check this tutorial out for some OpenGL rotations info, this one for some general quaternion theory as applied to 3D programming in general, and this example by Google for Android which shows exactly how to use this information directly.
If you read the articles, you can see why you get it in this form and why it's called Game Rotation Vector - it's what's been used by 3D programmers for games for decades at this point.
TL;DR: This example is excellent.
Edit - How to use this to show a 2D image which is rotated by this vector in 3D space.
In the example above, SensorManager.getRotationMatrixFromVector converts the Game Rotation Vector into a rotation matrix which can be applied to rotate anything in 3D. To apply this rotation to a 2D image, you have to think of the image in 3D, so it's actually a segment of a plane, like a sheet of paper. So you'd map your image, which in the jargon is called a texture, onto this plane segment.
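A hedged sketch of the sensor side, assuming a listener registered for TYPE_GAME_ROTATION_VECTOR and a rotationMatrix field shared with the renderer:

private final float[] rotationMatrix = new float[16];   // 4x4, ready for OpenGL

@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_GAME_ROTATION_VECTOR) {
        // Fills a column-major 4x4 matrix you can pass to gl.glMultMatrixf
        // or android.opengl.Matrix.multiplyMM.
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
    }
}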
Here is a tutorial on texturing cubes in OpenGL for Android with example code and an in depth discussion. From cubes it's a short step to a plane segment - it's just one face of a cube! In fact that's a good resource for getting to grips with OpenGL on Android, I'd recommend reading the previous and subsequent tutorial steps too.
You mentioned translation as well. Look at the onDrawFrame method in the Google code example. Note that there is a translation using gl.glTranslatef and then a rotation using gl.glMultMatrixf. This is how you translate and then rotate.
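In outline (a sketch only, not the Google sample verbatim), that part of onDrawFrame looks something like this; rotationMatrix is the 16-float matrix from the sensor sketch above, and drawTexturedQuad is a hypothetical helper that draws the image plane:

gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatef(0f, 0f, -3.0f);        // push the plane segment away from the eye
gl.glMultMatrixf(rotationMatrix, 0);   // then orient it with the device's rotation matrix
drawTexturedQuad(gl);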
The order in which these operations are applied matters. Here's a fun way to experiment with that: check out Livecodelab, a live 3D sketch-coding environment which runs inside your browser. In particular, this tutorial encourages reflection on the ordering of operations. Obviously the command move is a translation.

Difference between Camera.translate and Matrix.preTranslate or Matrix.postTranslate?

We use Camera to do 3D transformations on a canvas. We usually rotate the camera, get its Matrix, and then translate it. But Camera also has a translate method, and the results of using the two methods are different.
My question is : What is difference between Camera.translate and Matrix.preTranslate or Matrix.postTranslate?
The reason there are both is that matrix multiplication must be done in a certain order to achieve the proper result (as you may already know).
The sequence of translations/rotations/scales is applied in the reverse of the order in which you write them.
So if you do something like this:
camera.rotate(15, 0, 0);        // rotate 15 degrees about X
camera.scale(.5f, .5f, .5f);    // illustrative only: android.graphics.Camera has no scale(), but the ordering point still holds
camera.translate(70, 70, 70);
You're first translating 70,70,70 then scaling by 50% in all directions, then rotating 15 degrees about the X axis.
So Matrix has a pre- and post-translate (well, pre and post everything), because maybe you want to actually rotate it first by 15 degrees, then translate it, and finally scale it.
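A small sketch of the difference using android.graphics.Matrix directly, applying the same translation to two copies of the same rotation matrix:

Matrix m = new Matrix();
m.setRotate(15f);             // M = R(15 degrees)

Matrix pre = new Matrix(m);
pre.preTranslate(70f, 70f);   // M' = R * T : points are translated first, then rotated

Matrix post = new Matrix(m);
post.postTranslate(70f, 70f); // M' = T * R : points are rotated first, then translated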
So that answers the pre and post translates. Now, the reason Camera has a plain rotate and translate is for people who already know how this works (like me!), so I never use Matrix, or Camera for that matter, because I can simply do my rotations and translations directly on the Canvas. You can too, as long as you know that translations, scales, and rotations are applied in reverse order.
Also, knowing this gives you more power. You can do a sequence of 10 transformations without wrapping each one in its own Matrix object (for example, a swing motion that swings outward AND rotates about the center to simulate centrifugal force). That would otherwise need multiple rotates and translations wrapped in multiple Matrix objects passed into one another, but if you know how each translate works, you can simply do a series of .translate(), .rotate(), and .scale() calls.
This information is especially useful if you ever do 3D graphics, because that's when these matrices give people headaches.
I hope this helps!
The result would be visually the same if, for example, you leave the canvas alone but rotate the camera by 90 degrees, or keep the camera still but rotate the canvas it looks at by -90 degrees.

How to move the camera in order to provide rotation and scaling of a 3D object in Android?

I have created a cube. Now I want to perform rotation, zooming, and panning by moving the camera, so that moving the camera farther away zooms out and moving it nearer zooms in.
Please help, as I am new to Android and OpenGL ES.
There is no such thing as a camera in OpenGL. There are only two transformation matrices: model-view and projection. First you have to set up your projection matrix; you can do that using glFrustum or manually. Read this article about projections.
Then, in order to fake the camera behavior, you need to use the inverse transformation. That means that if you want to move your camera by (0,0,-5), you need to move the whole world by (0,0,5). The same goes for rotation and scaling.
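A minimal sketch of that inverse idea with the GL10 fixed pipeline used by the API Demos cube; angle and cube are placeholder names:

gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatef(0f, 0f, 5f);       // "camera" moved by (0,0,-5): move the world by (0,0,5)
gl.glRotatef(-angle, 0f, 1f, 0f);  // "camera" rotated by +angle about Y: rotate the world by -angle
cube.draw(gl);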
You should read the OpenGL Red Book; it is all described there.

How to create 3D rotation effect in Android OpenGL?

I am currently working on a 3D model viewer for Android using OpenGL-ES. I want to create a rotation effect according to the gesture given.
I know how to do single-axis rotation, such as rotating solely on the x-, y- or z-axis. However, my problem is that I don't know how to combine all three and have my app determine which axis to rotate about depending on the touch gesture.
Gestures I have in mind were:
Swipe up/down for x-axis
Swipe left/right for y-axis
Swipe in a circular motion for the z-axis
How can I do this?
EDIT: I found out that three types of swipes made the movement very ugly, so what I did was remove the z-axis motion. After removing that condition, I found that the other two work really well together with the same algorithm.
http://developer.android.com/resources/articles/gestures.html has some info on building a 'gesture library'. I haven't checked it out myself, but it seems to fit what you're looking for.
Alternatively, try something like gestureworks.com (again, I haven't tried it myself).
I use a ProgressBar view for zooming in and out. I then use movement in X on the GLSurfaceView to rotate around the Y axis, and movement in Y to rotate around the X axis.
The problem with a gesture is that the response is not instant, because the app has to work out which gesture the user made. If you then use how far the user moved their finger to determine the amount to rotate or zoom, there is no instant feedback, so it takes time for the user to learn how to control the rotate/zoom amount. I guess it would work if you rotated/zoomed by a set amount per gesture.
It sounds like what you are looking to do is more math-intensive than you might expect. There are two ways to do this mathematically: (1) using quaternions, or (2) using basic linear algebra (which can result in gimbal lock if you aren't careful, but since you are just spinning, that is not a concern for you). Let's go the second route since it's easier. You need to receive the beginning and end points of the swipe via a gesture listener, and when you have those two points, calculate the line they make. Once you have that line, you can easily find the perpendicular vector to it with high-school math. That should now be your axis of rotation in your rotation matrix:
//Rotate around the axis based on the rotation matrix (rotation, x, y, z)
gl.glRotatef(rot, 1.0f, 1.0f, 0.0f);
There is no need for a Z rotation, since you cannot rotate in the Z plane with a 2D tablet. The 1.0f, 1.0f values should be variables representing the x and y of your perpendicular vector, and (rot) should serve as the magnitude, taken from the distance between the two points.
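Putting that together, a rough sketch of the idea; all variable names are placeholders, and TOUCH_SCALE_FACTOR is a hypothetical degrees-per-pixel factor you would tune:

float dx = endX - startX;             // swipe direction in screen space
float dy = endY - startY;

float axisX = -dy;                    // perpendicular of (dx, dy) is (-dy, dx)
float axisY = dx;
float length = (float) Math.sqrt(dx * dx + dy * dy);

if (length > 0f) {
    axisX /= length;                  // normalise the rotation axis
    axisY /= length;
    float rot = length * TOUCH_SCALE_FACTOR;
    gl.glRotatef(rot, axisX, axisY, 0f);
}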
I haven't done this in a while, so let me know if you need more precision.
