I have a cube in my OpenGL view which I can rotate by touching outside of the cube. I can now detect which side of the cube I touch, and I'm trying to find out towards which side the touch movement is going. So if I touch the top side, the movement can go towards the LEFT, RIGHT, FRONT or BACK side of the cube. The cube's orientation can be anything when this happens.
After figuring that out, I'd make the cube rotate around the correct axis.
I just need an idea of how to implement this.
EDIT:
Here's a crude example of what I'm trying to do. Sorry, I'm bad at explaining.
The green thing is a finger touching the red side. The arrow shows the direction the finger is moving. Since it's moving towards the blue side (2), it should return 2; if it moved towards the green side (1), it would return 1, and so on.
Here is an approach:
When you swipe and lift your finger up, you get two 2D points in screen space: ptBegin and ptEnd. Convert these to 3D (you will need to do the equivalent of gluUnProject to get the 3D coords) and you will get the 3D coordinates ptBegin3D and ptEnd3D in the cube's coordinate system.
Calculate the vector D = ptEnd3D - ptBegin3D.
Now, take the dot product of D with each of the cube's face normals: (0, 0, 1), (1, 0, 0), etc. The face whose normal gives the largest positive dot product is the one you are moving towards.
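A minimal sketch of that test, assuming you already have the model-view and projection matrices plus the viewport (the helper names unproject and faceTowards are mine; note that Android's GLU.gluUnProject writes a 4-component result that needs a divide by w, and that touch y must be flipped because GL's window origin is bottom-left):

```java
import android.opengl.GLU;

// Unproject a window-space point into the cube's coordinate system. Pass the
// model-view matrix that includes the cube's transform. winZ is the depth at
// the touch point, in [0, 1].
static float[] unproject(float winX, float winY, float winZ,
                         float[] modelView, float[] projection, int[] viewport) {
    float[] obj = new float[4];
    // Touch events use a top-left origin; GL windows use bottom-left, so flip y.
    GLU.gluUnProject(winX, viewport[3] - winY, winZ,
            modelView, 0, projection, 0, viewport, 0, obj, 0);
    return new float[] { obj[0] / obj[3], obj[1] / obj[3], obj[2] / obj[3] };
}

// Face normals in the cube's local space: +x, -x, +y, -y, +z, -z.
static final float[][] FACE_NORMALS = {
        { 1, 0, 0 }, { -1, 0, 0 },
        { 0, 1, 0 }, { 0, -1, 0 },
        { 0, 0, 1 }, { 0, 0, -1 },
};

// The face the swipe heads towards is the one whose normal has the largest
// positive dot product with D = ptEnd3D - ptBegin3D.
static int faceTowards(float[] begin3d, float[] end3d) {
    float dx = end3d[0] - begin3d[0];
    float dy = end3d[1] - begin3d[1];
    float dz = end3d[2] - begin3d[2];
    int best = -1;
    float bestDot = -Float.MAX_VALUE;
    for (int i = 0; i < FACE_NORMALS.length; i++) {
        float[] n = FACE_NORMALS[i];
        float dot = dx * n[0] + dy * n[1] + dz * n[2];
        if (dot > bestDot) { bestDot = dot; best = i; }
    }
    return best;
}
```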
I am developing an app where the user can draw onto a Bitmap (with MotionEvents and a Canvas), which then gets projected onto an OpenGL surface as a texture. The issue is that I'm having trouble correctly mapping the pixel where the user touched (the [x, y] position from the top-left corner of the screen, given by the MotionEvent) onto the correct place on the Bitmap, taking into account both the position and scaling of the view matrix (and maybe the projection as well). I'm unsure whether there's a simple way to mathematically relate screen pixels to GL's normalized device coordinates, which would make things easier.
Example:
Let's say the user touched the top-left corner of the screen, (0,0). Due to the view transformations (scaling and translation), however, that corner happens to sit exactly at the center of the projected Bitmap. How can I offset this (0,0) position so that it draws at the center of the Bitmap?
(Also, if what I'm trying to do is extremely impractical and there's a much easier way, I'd be happy to hear it.)
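For what it's worth, the pixel-to-NDC relation is just a linear remap, and undoing the view and projection is a matrix inverse. A sketch under the assumption of a viewport covering the whole view (the method name touchToWorld and all the parameter names are mine):

```java
import android.opengl.Matrix;

// Hypothetical helper: maps a touch (pixels, origin top-left, y down) to the
// z = 0 plane in world space, undoing both the view and projection matrices.
static float[] touchToWorld(float touchX, float touchY,
                            int viewWidth, int viewHeight,
                            float[] viewMatrix, float[] projectionMatrix) {
    // Screen pixels -> NDC: x in [-1, 1] left to right, y in [-1, 1] bottom to top.
    float ndcX = 2f * touchX / viewWidth - 1f;
    float ndcY = 1f - 2f * touchY / viewHeight;

    float[] vp = new float[16];
    float[] invVp = new float[16];
    Matrix.multiplyMM(vp, 0, projectionMatrix, 0, viewMatrix, 0);
    Matrix.invertM(invVp, 0, vp, 0);

    float[] world = new float[4];
    Matrix.multiplyMV(world, 0, invVp, 0, new float[] { ndcX, ndcY, 0f, 1f }, 0);
    // Perspective divide, in case the projection is not orthographic.
    return new float[] { world[0] / world[3], world[1] / world[3] };
}
```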
I am trying to develop an Android app which draws a line directly in front of me. My reference point is my phone, which means the line has to be parallel to my phone's left side.
I have the following issues:
I am using Sceneform, so there is no onSurfaceCreated callback (is that right?).
I assume the white dots show the detected surface. If a surface is detected, I should be able to place a shape on it, but sometimes that doesn't work. And sometimes I can place a shape even when there are no visible white dots.
When I try to draw a line between the points (0,0,0) and (1,0,0), it is not always parallel to the left side of my phone. I assume the reason is the following: the angle between the left-bottom corner and the left-top corner of the detected surface is not zero (taking the phone's left side as the y-axis and its bottom as the x-axis), and this angle changes each time I reopen the app.
These are more theory questions than implementation questions, so I need someone to confirm or refute them, or give me a guideline.
1) There is no method like onSurfaceCreated.
2) Not all detected planes are covered with white dots. This is intended: if every detected plane were rendered with white dots, it would confuse users.
3) When you talk about the points (0,0,0) and (1,0,0), is that a world position or a local position? Either way, you cannot draw a line parallel to the left side of your phone with that approach.
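If the goal is a line that follows the phone's left edge regardless of how the world frame happened to be created, one route is to read the direction off the camera pose each frame instead of hard-coding world points. A sketch using ARCore's Pose API, assuming you can reach the current Frame (in Sceneform, e.g. via arSceneView.getArFrame()):

```java
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;

// Returns a world-space unit vector parallel to the phone's left edge.
// getDisplayOrientedPose() is aligned with the display (x right, y up on
// screen), so its y-axis runs along the long edge in portrait.
static float[] directionAlongPhoneSide(Frame frame) {
    Pose pose = frame.getCamera().getDisplayOrientedPose();
    return pose.getYAxis();
}
```

Two points placed at p and p + t * dir (for some length t) then define a line that stays parallel to the device's side, whatever the world origin turned out to be.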
How does Matrix.setLookAtM work? I've been searching all over and can't find an explanation. I understand that the first three coordinates define the location of the camera in world space, and I take it that "center of view" means the x, y, z coordinates I'm looking at in world space. That being the case, what does the "up vector" mean/do?
If there is a previous question or tutorial that I've overlooked, I would be happy to accept that.
The up vector is what the camera considers "up"; i.e., if you were looking forward and held your hand up, that is your "up" vector. Just set it to (0, 1, 0). I'm not an Android developer, but I'm guessing it's similar to gluLookAt().
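It is indeed the Android equivalent of gluLookAt(). A typical call, with the up vector set to (0, 1, 0) as suggested:

```java
import android.opengl.Matrix;

float[] viewMatrix = new float[16];
// Camera at (0, 0, 5), looking at the origin, with +y as "up".
Matrix.setLookAtM(viewMatrix, 0,
        0f, 0f, 5f,   // eye position
        0f, 0f, 0f,   // center of view
        0f, 1f, 0f);  // up vector
```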
What the function is really doing is setting up a view matrix for you. It needs the eye position to establish where the camera will be. After that, it subtracts the eye position from the center and normalizes the result to get a forward vector. Then it crosses the forward vector with the up vector to get a right vector (and crosses right with forward again, so the final up is truly orthogonal; your supplied up vector only needs to be roughly correct). After normalizing all three, it can construct a matrix from those x, y, z vectors, giving you a basic model-view matrix.
It just packages that math up for you.
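Spelled out, the construction looks roughly like this; a sketch of the math, not the actual Android source (column-major layout, as OpenGL expects):

```java
// Build a view matrix from eye, center, and up, the way gluLookAt does.
static float[] lookAt(float[] eye, float[] center, float[] up) {
    float[] f = normalize(sub(center, eye));  // forward
    float[] r = normalize(cross(f, up));      // right = forward x up
    float[] u = cross(r, f);                  // recomputed orthogonal up
    return new float[] {
            r[0], u[0], -f[0], 0f,
            r[1], u[1], -f[1], 0f,
            r[2], u[2], -f[2], 0f,
            -dot(r, eye), -dot(u, eye), dot(f, eye), 1f,  // translate by -eye
    };
}

static float[] sub(float[] a, float[] b) {
    return new float[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
}

static float dot(float[] a, float[] b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

static float[] cross(float[] a, float[] b) {
    return new float[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0] };
}

static float[] normalize(float[] v) {
    float len = (float) Math.sqrt(dot(v, v));
    return new float[] { v[0] / len, v[1] / len, v[2] / len };
}
```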
I am currently working on a 3D model viewer for Android using OpenGL-ES. I want to create a rotation effect according to the gesture given.
I know how to do single-axis rotation, such as rotating solely on the x-, y- or z-axis. However, my problem is that I don't know how to combine all three and have my app know which axis to rotate around based on the touch gesture.
The gestures I have in mind are:
Swipe up/down for x-axis
Swipe left/right for y-axis
Swipe in a circular motion for z-axis
How can I do this?
EDIT: I found out that having 3 types of swipes makes the movement very ugly. Therefore, I removed the z-axis motion. After removing that condition, I found that the other two work really well in conjunction with the same algorithm.
http://developer.android.com/resources/articles/gestures.html has some info on building a 'gesture library'. I haven't checked it out myself, but it seems to fit what you're looking for.
Alternatively, try something like gestureworks.com (again, I've not tried it myself).
I use a ProgressBar view for zooming in and out. I then use movement in x on the GLSurfaceView to rotate around the y-axis, and movement in y to rotate around the x-axis.
The problem with a gesture is that the response is not instant, because the app has to work out which gesture the user performed. If you then use how far the user moved their finger to determine the amount to rotate/zoom, there is no instant feedback, so it takes time for the user to learn how to control the rotate/zoom amount. I guess it would work if you rotated/zoomed by a set amount per gesture.
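A minimal sketch of that drag-to-rotate approach inside a GLSurfaceView subclass (TOUCH_SCALE and the angle fields are my names; calling requestRender() with RENDERMODE_WHEN_DIRTY keeps the feedback instant):

```java
import android.view.MotionEvent;

private static final float TOUCH_SCALE = 0.5f;  // degrees per pixel, tune to taste
private float previousX, previousY;
public volatile float angleX, angleY;           // read by the renderer each frame

@Override
public boolean onTouchEvent(MotionEvent e) {
    if (e.getAction() == MotionEvent.ACTION_MOVE) {
        float dx = e.getX() - previousX;
        float dy = e.getY() - previousY;
        angleY += dx * TOUCH_SCALE;  // horizontal drag -> rotate around the y-axis
        angleX += dy * TOUCH_SCALE;  // vertical drag -> rotate around the x-axis
        requestRender();             // redraw immediately for instant feedback
    }
    previousX = e.getX();
    previousY = e.getY();
    return true;
}
```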
It sounds like what you are looking to do is more math-intensive than you might realize. There are two ways to do this mathematically: (1) using quaternions, or (2) using basic linear algebra (which can result in gimbal lock if you aren't careful, but since you are just spinning, that's not a concern here). Let's go the second route since it's easier. You need to receive the beginning and end points of the swipe via a gesture listener, and once you have those two points, calculate the line between them. When you have that line, you can easily find the perpendicular vector to it with high-school math. That becomes your axis of rotation in your rotation call:
// Rotate by `rot` degrees around the axis (axisX, axisY, 0)
gl.glRotatef(rot, axisX, axisY, 0.0f);
No need for a z component, since you cannot rotate in the z plane with a 2D touch screen. axisX and axisY should be variables holding the x, y of your perpendicular vector, and rot should be proportional to the distance between the two points.
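Putting it together, computing the axis and magnitude from the swipe might look like this (a sketch; DEGREES_PER_PIXEL and the point names are mine, and the perpendicular of the swipe (dx, dy) comes out as (dy, dx) once you account for screen y pointing down while GL y points up):

```java
// (beginX, beginY) and (endX, endY) come from your gesture listener.
float dx = endX - beginX;   // swipe vector in screen space
float dy = endY - beginY;
// Perpendicular axis of rotation; glRotatef normalizes the axis itself.
float axisX = dy;
float axisY = dx;
final float DEGREES_PER_PIXEL = 0.25f;  // tune to taste
float rot = (float) Math.sqrt(dx * dx + dy * dy) * DEGREES_PER_PIXEL;
gl.glRotatef(rot, axisX, axisY, 0f);
```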
I haven't done this in a while, so let me know if you need more precision.
I'm very new to using GL, so please bear with me!
I have a plane with a cube resting on top of it. There are three SeekBars in the Activity: two let the user rotate along the x and z axes (i.e. tilt and rotate), and one lets them "zoom in" (i.e. translate on the z-axis). What I'd like to do is allow the user to go into a bird's-eye view of the plane and drag their finger along it to place a "marker", which will just be a semi-transparent circle.
When the user releases their finger, I'd like the marker to stay where they left it. I'd then like to be able to rotate the 3D scene and see that the marker is not just a flat circle but acts almost like a spotlight: it interacts with other objects (e.g. the cube).
How would this be done? Do I need to look into something like lighting?
Look into projective texturing.
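The idea is to treat the marker as a texture projected from a virtual "projector" above the plane, so it lands on whatever geometry it hits (plane and cube alike). A sketch of the matrix setup with android.opengl.Matrix, assuming a shader that transforms each vertex by this matrix and samples the marker texture with a projective divide (e.g. texture2DProj in GLSL); the function and parameter names are mine:

```java
import android.opengl.Matrix;

// Builds the texture matrix for projecting the marker texture straight down
// onto the scene at (markerX, markerZ).
static float[] markerTextureMatrix(float markerX, float markerZ) {
    float[] projectorView = new float[16];
    float[] projectorProj = new float[16];
    float[] bias = {                 // maps clip space [-1, 1] to tex coords [0, 1]
            0.5f, 0f, 0f, 0f,
            0f, 0.5f, 0f, 0f,
            0f, 0f, 0.5f, 0f,
            0.5f, 0.5f, 0.5f, 1f,
    };
    // Projector hovers above the marker position, looking straight down.
    Matrix.setLookAtM(projectorView, 0,
            markerX, 10f, markerZ,   // eye above the plane
            markerX, 0f, markerZ,    // looking at the marker point
            0f, 0f, -1f);            // "up" along -z so the texture isn't flipped
    Matrix.orthoM(projectorProj, 0, -1f, 1f, -1f, 1f, 0.1f, 20f);

    float[] tmp = new float[16];
    float[] texMatrix = new float[16];
    Matrix.multiplyMM(tmp, 0, projectorProj, 0, projectorView, 0);
    Matrix.multiplyMM(texMatrix, 0, bias, 0, tmp, 0);
    return texMatrix;  // combine with each object's model matrix in the shader
}
```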