The question is simple: how can I identify which object the user has touched in OpenGL?
I've tried using the onTouchEvent event, but this only returns the X, Y position on the screen.
A similar question was asked (& answered) in this thread:
Detect user's touches over an OpenGL square
Basically there are two methods. One is rendering all objects to a buffer in different colours and then looking at the colour information at the specified 'pick coordinate' to identify your object.
The other (and I think less resource-intensive) method is retrieving the 'ray' and then doing a hit test against bounding boxes you provide for all the objects currently rendered on the screen.
edit:
If you're doing your rendering orthographically/2D, then this simplifies things somewhat.
You can do a simple hit test with the point you touched against a rectangle (or circle, or polygon perhaps) you provide for the image that you've drawn.
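For instance, a minimal sketch of that hit test using android.graphics.RectF; the bounds values and touch coordinates are hypothetical names for wherever you drew the image and whatever onTouchEvent gave you:

    // Keep the drawn image's bounds in a RectF and test the touch point
    // against it. imageLeft/imageTop/imageWidth/imageHeight are placeholders.
    RectF bounds = new RectF(imageLeft, imageTop,
                             imageLeft + imageWidth, imageTop + imageHeight);
    boolean touched = bounds.contains(touchX, touchY);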
Hope this helps.
I need to load a 3D model in my app (it's not a game, not that it makes any difference) and detect when the user touches specific parts of this model, so I can take different actions.
How can I do this? Is it possible?
I'm not familiar with Rajawali, but GitHub describes it as an OpenGL ES framework. As you described it in the comment above, you'll need to consider two basic user actions, and one action I'll add as helpful:
Swipe across the screen in some direction: change in X, change in Y.
Touch at some (x,y) point on screen with the car in some orientation.
(Optional) Zoom in/out to make it easier for a user to select small features such as side mirrors.
Depending on what OpenGL ES details Rajawali exposes, you'll need to do one or both of the following:
Learn about the four matrices that determine how a 3D scene is rendered on a 2D screen.
Find the Rajawali functions with names such as "lookAt" or "setViewpoint," and learn how to pass screen gesture info to these functions.
You can read about the four OpenGL matrices at length elsewhere. Even if Rajawali simplifies the coding, you should still learn a little about those matrices. Although your first inclination may be to change the "model" matrix that affects the object's position and orientation, it's more likely that you'll be manipulating the "view" matrix that determines the point and direction in space from which the user sees the car. That is, the car will actually remain centered at (0,0,0), and the user's swipes, touches, and pinches will change the viewpoint.
Constraining movements so that the vehicle is always centered is nice both because your code can be a little simpler, and also because the user can't "lose" the car by sliding the viewpoint too far to one side.
The simplest change of viewpoint is a zoom, which in most implementations means simply changing the Z translation of the viewpoint matrix. Rajawali may make this simpler by providing zoomIn() and zoomOut() functions. Otherwise you'll need to do this:
In the callback or "event handler" provided by Rajawali/Android for a pinch, get the pinch-in or pinch-out value.
Call the Rajawali zoomIn() or zoomOut() function, if it exists. You will likely need to scale the value so that the amount of pinch matches expectations for zooming in and out of a car model.
Alternately, set the Z translation component of the view matrix.
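For illustration, here's a hedged sketch of that flow inside a GLSurfaceView subclass. ScaleGestureDetector is a standard Android class, but camera.getZ()/setZ() merely stand in for whatever Rajawali actually exposes, and ModelView, MIN_Z, and MAX_Z are hypothetical:

    private ScaleGestureDetector pinchDetector;

    public ModelView(Context context) {
        super(context);
        pinchDetector = new ScaleGestureDetector(context,
                new ScaleGestureDetector.SimpleOnScaleGestureListener() {
            @Override
            public boolean onScale(ScaleGestureDetector d) {
                // getScaleFactor() > 1 means the fingers moved apart (zoom in).
                // `camera` stands in for the framework's camera object.
                float z = camera.getZ() / d.getScaleFactor();
                camera.setZ(Math.max(MIN_Z, Math.min(MAX_Z, z))); // clamp the zoom
                return true;
            }
        });
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        pinchDetector.onTouchEvent(event);  // feed every touch to the detector
        return true;
    }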
Converting an (x,y) 2D screen touch point to a ray cast into 3D space can be tricky if Rajawali doesn't provide an appropriate function, called something like "screenToWorld," that accepts a point in 2D screen space and returns a 3D point or 3D ray in world space. Spend some time googling for "ray casting" in Rajawali. Here's a brief overview of what the code will need to do:
Convert a 2D touch point into a 3D ray pointed into the screen.
Check for the intersection(s) of the 3D ray and various subobjects.
(Optional) Change the color or otherwise highlight the selected object.
OpenGL does not provide a ray casting function, and I don't recommend implementing it on your own unless you have no choice. Various frameworks that wrap around or supplement OpenGL may provide it. OpenGL coders will fault me for this description, but from memory here's how to convert a 2D touch point into a 3D ray pointing into the screen:
Get the (x,y) 2D screen touch point from a "touch" or "click" callback or event handler in Rajawali or Android.
Convert the 2D touch point to a 3D point. If I remember correctly, this means setting Z to some value such as -1, 0, or 1. This is the base point of the ray.
Define a second 3D point with a different Z value. This is a far point of the ray.
Use the inverses of the screen, projection, and view matrices to transform the 3D points into "world" space (this is what gluUnProject does).
Given the 3D world coordinates for your base point and far point, use ray-object intersection to determine what object is intersected.
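Here's a rough sketch of steps 2 through 4, using android.opengl.GLU.gluUnProject from the standard Android SDK; the modelview/projection matrices and the viewport array are assumed to be the ones you rendered with:

    // Returns {baseX, baseY, baseZ, dirX, dirY, dirZ} for the picking ray.
    float[] touchToRay(float touchX, float touchY,
                       float[] modelViewMatrix, float[] projMatrix, int[] viewport) {
        float winY = viewport[3] - touchY;  // Android Y points down, GL Y points up
        float[] near = new float[4];
        float[] far = new float[4];

        // winZ = 0 unprojects to the near plane, winZ = 1 to the far plane.
        GLU.gluUnProject(touchX, winY, 0f, modelViewMatrix, 0, projMatrix, 0,
                viewport, 0, near, 0);
        GLU.gluUnProject(touchX, winY, 1f, modelViewMatrix, 0, projMatrix, 0,
                viewport, 0, far, 0);

        // Android's gluUnProject returns homogeneous coordinates: divide by w.
        for (int i = 0; i < 3; i++) {
            near[i] /= near[3];
            far[i] /= far[3];
        }
        return new float[] { near[0], near[1], near[2],
                far[0] - near[0], far[1] - near[1], far[2] - near[2] };
    }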
Again, Rajawali may provide some function that determines which object(s) are intersected by the ray. If multiple objects are returned, then pick the closest object. Since your vehicle is already subdivided into multiple subobjects this shouldn't be too hard. Implementing pinch-to-zoom can make it easier for a user to select a small object.
Swiping is analogous to a mouse move for OpenGL, and many starter projects for OpenGL describe how to convert a mouse move to a rotation. Assuming for the moment that the model rotates only about the vertical axis from the ground through the roof, then you simply need to change left/right swipes to positive/negative rotations about what in OpenGL is typically the Y-axis.
From Android/Rajawali, handle the "swipe" event in the appropriate callback or event handler. This is analogous to a "mouseMove" function.
Translate the left/right swipe into a negative/positive value.
Call the rotateAboutY() function, if available, OR apply a rotation to the viewpoint matrix (which I won't describe here).
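A hedged sketch of that flow, again assuming a GLSurfaceView subclass set to render on demand; DEGREES_PER_PIXEL is a tuning constant you'd pick by feel, and rotateAboutY() remains hypothetical:

    private static final float DEGREES_PER_PIXEL = 0.25f;
    private float previousX;
    private float cameraAngleY;  // accumulated rotation about the vertical axis

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
                previousX = event.getX();
                return true;
            case MotionEvent.ACTION_MOVE:
                float dx = event.getX() - previousX;
                previousX = event.getX();
                // Left swipe -> negative rotation, right swipe -> positive.
                cameraAngleY += dx * DEGREES_PER_PIXEL;
                // Applying cameraAngleY to the view matrix (or a framework
                // call such as the hypothetical rotateAboutY()) happens in
                // the renderer.
                requestRender();  // redraw with the updated angle
                return true;
        }
        return super.onTouchEvent(event);
    }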
Given all that, I would suggest the following approach:
See if Rajawali provides convenience functions to convert screen coordinates to a world ray, to convert a screen swipe to a rotation, and to test a ray intersection with a series of objects.
Even if Rajawali provides these functions, read a little bit about the low-level OpenGL ES underneath, and the four matrices: screen, perspective, viewpoint, and model.
If Rajawali doesn't provide the convenience functions, look for a framework that does OR see if some other library that works with Rajawali can provide these convenience functions.
If you can't change frameworks or find a framework that hides the messy details, plan to spend a week or more studying OpenGL closely. You probably don't need to know about shaders, textures, etc., but you will need to understand the OpenGL 3D space, the four matrices, and so on.
Now I have a circle and an image on the canvas:
a ping-pong board (the image) and a ping-pong ball (drawn with drawCircle).
The position of the ball will depend on the accelerometer.
Is it possible to detect whether the ball is outside the board or not?
Or do I need to draw the board programmatically without using the image?
What you are looking for is called 'collision detection': a technique used in game programming where you define boundaries (the area within which an object can be hit) and then check whether another object enters those positions.
You can do this simply by saying that the boundaries are anything within the height/width of the image on the canvas, but I suspect that in your game you will want a subsection of that.
You will need to define an object related to your image that holds the 'collision boundary'. In a 2D game that will be the starting X, Y plus the width and height, while in a 3D game you will also need to store the Z position.
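For instance, a minimal sketch of such a 2D check, assuming the board is an axis-aligned rectangle matching where the image sits on the canvas (all the parameter names here are hypothetical):

    // True if the circular ball lies fully inside the rectangular board.
    boolean isBallOnBoard(float ballX, float ballY, float ballRadius,
                          float boardLeft, float boardTop,
                          float boardWidth, float boardHeight) {
        return ballX - ballRadius >= boardLeft
            && ballX + ballRadius <= boardLeft + boardWidth
            && ballY - ballRadius >= boardTop
            && ballY + ballRadius <= boardTop + boardHeight;
    }

So no, you don't need to draw the board programmatically; knowing where the image was drawn is enough.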
This is probably quite confusing to start with but I found you this little guide that explains it in more detail than I have space for here:
http://www.kilobolt.com/day-4-collision-detection-part-1.html
Let me know if you have any questions and the game sounds exciting!
I'm new to this Android thing, and I have to develop an application that can help an autistic child learn numbers. I have a few ideas and I've been trying to learn and implement the code, but it failed. The question is: how can I apply motion code or a sprite to draw a number or letter? For example, I want to make the penguin move along the line and draw a number nine.
There is an example from mybringback.com in which an image moves to draw a rectangle. How can I adapt it to draw a number? I'm sorry if I'm asking too much; I'm just trying to get some ideas.
I think that you should first build a utility program, in order to create the "path vector".
What I mean by a path vector is simply a vector of points (where a point has an x value and a y value). Your utility should let you draw whatever you want with a simple pen: draw on a surface, store points while the mouse is down, and ignore points when the mouse is up.
Then, in the main program, you will just have to read back the path of your number/letter.
I've tried to implement something like this for the Sugar OLPC platform, without serializing the path into files: I was able to draw and to view the animation, using the process I've just described.
Hope it can help you.
P.S.: I used the word mouse, but you guessed that I'm talking about a finger...
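A minimal sketch of the recording half, assuming a custom View whose onTouchEvent receives the finger events:

    private final List<Point> path = new ArrayList<>();  // the "path vector"

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
            case MotionEvent.ACTION_MOVE:
                // Finger is down: store the point.
                path.add(new Point((int) event.getX(), (int) event.getY()));
                return true;
            case MotionEvent.ACTION_UP:
                // Finger is up: stop recording. The path can later be
                // replayed in the main program to animate the figure.
                return true;
        }
        return super.onTouchEvent(event);
    }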
There are various ways to achieve animation effects. One approach that is quite versatile involves creating a custom View or SurfaceView in which you override the onDraw method. Various tutorials can be found on this; the official Android discussion of it is here:
http://developer.android.com/guide/topics/graphics/2d-graphics.html#on-view
Your implementation will look something like this:
// Find elapsed time since previous draw
// Compute new position of drawable/bitmap along figure
// Draw bitmap in appropriate location
// Add line to buffer containing segments of curve drawn so far
// Render all segments in curve buffer
// Take some action to call for the rendering of the next frame (this may be done in another thread)
Obviously a simplification. For a very simplistic tutorial, see here:
http://www.techrepublic.com/blog/software-engineer/bouncing-a-ball-on-androids-canvas/1733/
Note that different implementations of this technique will require different levels of involvement by you; for example, if you use a SurfaceView, you are in charge of calling the onDraw method, whereas subclassing the normal View lets you leave Android in charge of redrawing (at the expense of limiting your ability to draw on a different thread). In this respect, Google remains your friend =]
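To make the pseudocode above concrete, here's a hedged sketch of a custom View that moves a bitmap along a pre-recorded path (e.g. the outline of a "9") and leaves a trail behind it; the path, the sprite, and the speed constant are all assumptions:

    import java.util.List;
    import android.content.Context;
    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.graphics.Paint;
    import android.graphics.PointF;
    import android.os.SystemClock;
    import android.view.View;

    public class TracingView extends View {
        private static final float SPEED_POINTS_PER_SECOND = 60f;
        private final List<PointF> pathPoints;  // the figure to trace
        private final Bitmap sprite;            // e.g. the penguin
        private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        private long lastFrameTime = SystemClock.uptimeMillis();
        private float progress = 0f;            // fractional index into pathPoints

        public TracingView(Context context, List<PointF> pathPoints, Bitmap sprite) {
            super(context);
            this.pathPoints = pathPoints;
            this.sprite = sprite;
        }

        @Override
        protected void onDraw(Canvas canvas) {
            if (pathPoints.isEmpty()) return;

            // Find elapsed time since the previous draw; advance along the figure.
            long now = SystemClock.uptimeMillis();
            progress += SPEED_POINTS_PER_SECOND * (now - lastFrameTime) / 1000f;
            lastFrameTime = now;
            int index = Math.min((int) progress, pathPoints.size() - 1);

            // Render all segments of the curve drawn so far.
            for (int i = 1; i <= index; i++) {
                PointF a = pathPoints.get(i - 1), b = pathPoints.get(i);
                canvas.drawLine(a.x, a.y, b.x, b.y, paint);
            }

            // Draw the sprite at its current position along the figure.
            PointF p = pathPoints.get(index);
            canvas.drawBitmap(sprite, p.x, p.y, paint);

            // Call for the next frame until the figure is complete.
            if (index < pathPoints.size() - 1) invalidate();
        }
    }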
I am making an Android game app and am using OpenGL to load 3D .obj models into the app. I would like to know if anyone can help me make these objects interactive. All I really need is to make an object clickable, but it would be cool to learn more, such as dragging it around the screen.
Thanks for any help.
What you want is called picking.
This isn't an easy task, and depending on what exactly you want to do, you have multiple options.
The problem: you want to select an object in 3D space based on a 2D coordinate (the mouse/touch position). With only the 2D mouse coordinates, you are missing one coordinate needed to determine the exact click position in 3D space.
One possible option would be:
Render your object with a specific color (e.g. completely red)
After that save the current display buffer to a variable
Clear the display buffer and render your model again with the standard settings (this is the screen that is displayed to the user)
Determine the click/touch position and translate it to a coordinate on your display area.
Check the color at this coordinate in your saved display buffer. If the color at this position is red, the user clicked/touched an object.
This approach isn't that flexible, but the implementation is very simple compared to other solutions. It is limited because you can only detect whether or not the user clicked/touched a certain object; you cannot determine the exact position on the object.
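A minimal sketch of the last two steps, assuming GLES20 on Android; this must run on the GL thread while the red picking render is still in the buffer, and surfaceHeight/touchX/touchY are hypothetical names (imports: android.opengl.GLES20, java.nio.ByteBuffer, java.nio.ByteOrder):

    // surfaceHeight, touchX, touchY are placeholders for your view's values.
    ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
    int glY = surfaceHeight - (int) touchY;  // flip Y: GL's origin is bottom-left
    GLES20.glReadPixels((int) touchX, glY, 1, 1,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
    boolean hit = (pixel.get(0) & 0xFF) > 200   // red channel high
               && (pixel.get(1) & 0xFF) < 50    // green low
               && (pixel.get(2) & 0xFF) < 50;   // blue low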
Another possible option is to compute a ray into the 3D world based on the 2D click position and then determine all objects in 3D space that collide with this ray. This is called ray picking.
You can find an OpenGL tutorial for ray picking here
The example uses glRenderMode, glLoadName, etc., which maybe isn't the best choice if you are not using the fixed-function pipeline (e.g. if you are using custom shaders).
Another option would be to do the math and compute the ray vector yourself based on the click position and the viewport and projection matrices. If you want to do this, the documentation of gluUnProject can help you.
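Once you have the ray, the per-object collision test can be as simple as a ray/bounding-sphere check; here's a hedged sketch using plain float arrays, with the direction d assumed to be normalized:

    // True if the ray from origin o in direction d hits the sphere at c, radius r.
    static boolean raySphereHit(float[] o, float[] d, float[] c, float r) {
        float cx = c[0] - o[0], cy = c[1] - o[1], cz = c[2] - o[2];
        float t = cx * d[0] + cy * d[1] + cz * d[2];   // projection onto the ray
        if (t < 0) return false;                       // sphere is behind the ray
        float distSq = (cx * cx + cy * cy + cz * cz) - t * t;
        return distSq <= r * r;                        // closest approach within r?
    }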
I have already implemented a cube that can be rotated by gestures in Android OpenGL ES. Now I want to make it so that when I click somewhere on the cube, it can tell which face has been touched and make some response.
I searched the Internet and found that color picking is a good way; here are some tutorials: http://www.lighthouse3d.com/opengl/picking/index.php?color1
But I still find it difficult.
How do I assign each face a different color?
How do I read the pixel where the mouse was clicked from the back buffer?
Can anyone show me some more details? Thanks a lot!
If you don't mind, leave me an email address and I can send you the work I have done. Thanks :)
The first comment is that it's almost always faster to do this analytically — by casting a ray into the world. That comment aside...
You'd assign each face a different colour for picking just like for any other sort of rendering, whether by changing what you pass to glColorPointer (if using ES 1), by switching to a single-pixel, single-coloured texture, or by any other means. If you have lighting enabled, be sure to disable it.
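As a hedged ES 1.x sketch of that "ID colour" pass: build a colour buffer in which all four vertices of each face share one distinctive colour, then point glColorPointer at it with lighting disabled (gl is the usual GL10 instance from javax.microedition.khronos.opengles, and java.nio buffers are used as elsewhere on Android):

    float[][] faceColours = {                        // six distinct "ID" colours
        {1, 0, 0, 1}, {0, 1, 0, 1}, {0, 0, 1, 1},
        {1, 1, 0, 1}, {1, 0, 1, 1}, {0, 1, 1, 1},
    };
    float[] colours = new float[6 * 4 * 4];          // 6 faces x 4 vertices x RGBA
    for (int face = 0; face < 6; face++)
        for (int vertex = 0; vertex < 4; vertex++)
            System.arraycopy(faceColours[face], 0, colours,
                    (face * 4 + vertex) * 4, 4);

    FloatBuffer colourBuffer = ByteBuffer
            .allocateDirect(colours.length * 4)      // 4 bytes per float
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    colourBuffer.put(colours).position(0);

    gl.glDisable(GL10.GL_LIGHTING);                  // keep the ID colours exact
    gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
    gl.glColorPointer(4, GL10.GL_FLOAT, 0, colourBuffer);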
You can use glReadPixels to read a colour back from a frame buffer. On a touch-screen device you probably want to grab, say, a 20x20 pixel area and pick whichever colour appears most often in it, or something like that, because fingers aren't very precise.
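A sketch of that fuzzy read-back, assuming GLES20: read a 20x20 block centred on the touch and return the colour that appears most often (clamping the block to the viewport edges is omitted for brevity):

    int pickDominantColour(int touchX, int touchY, int surfaceHeight) {
        final int size = 20;
        ByteBuffer buf = ByteBuffer.allocateDirect(size * size * 4)
                .order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(touchX - size / 2,
                (surfaceHeight - touchY) - size / 2,  // flip Y for OpenGL
                size, size, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);

        Map<Integer, Integer> counts = new HashMap<>();
        for (int i = 0; i < size * size; i++) {
            int rgb = (buf.get(4 * i) & 0xFF) << 16       // pack R, G, B
                    | (buf.get(4 * i + 1) & 0xFF) << 8
                    | (buf.get(4 * i + 2) & 0xFF);
            counts.merge(rgb, 1, Integer::sum);
        }
        return Collections.max(counts.entrySet(),
                Map.Entry.comparingByValue()).getKey();
    }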