I'm developing an Android application with OpenGL and JNI (all OpenGL stuff is in C code).
Imagine I've drawn a cube. I want the user to be able to put a finger on the cube, rotate it, and move it around the screen.
Is there any way to do that?
How can I assign an event listener for touch and move events only when the user touches the cube?
UPDATE: I want something like this:
Rotate cube with fingers
Thanks.
This is called "picking" in 3d-ville... There are a number of tutorials on the subject floating hither and yon. There's even another question (sans the JNI spin) here on StackOverflow.
Also, check out this Google I/O video on developing Android games to see why your approach may not be faster than pure Java... It Depends.
It turns out that JNI calls are quite expensive, so a JNI-based renderer could end up slower than a pure-Java one unless you are very careful. Your mileage will vary.
I'm pretty sure you'll have to listen to all touches then change behavior based on what is being touched. I suppose you could compute your cube's bounding box on screen and then monkey with your listeners every frame (or every time it moves), but I seriously doubt that would be the most efficient course to take. Listen for all touches, react appropriately.
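For illustration, here's a minimal sketch of that approach, assuming a GLSurfaceView subclass; nativeOnTouch is a hypothetical JNI entry point you'd implement in your C code to run the picking test:

    import android.content.Context;
    import android.opengl.GLSurfaceView;
    import android.view.MotionEvent;

    public class CubeSurfaceView extends GLSurfaceView {

        public CubeSurfaceView(Context context) {
            super(context);
        }

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            final int action = event.getActionMasked();
            final float x = event.getX();
            final float y = event.getY();
            // Forward every touch to the GL thread; the C side runs the
            // picking test and simply ignores events that miss the cube.
            queueEvent(new Runnable() {
                @Override
                public void run() {
                    nativeOnTouch(action, x, y); // hypothetical JNI function
                }
            });
            return true;
        }

        private native void nativeOnTouch(int action, float x, float y);
    }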
I need to load a 3D model in my app (it's not a game, not that it makes any difference) and detect when the user touches specific parts of this model, in order to take different actions.
How can I do this? Is it possible?
I'm not familiar with Rajawali, but GitHub describes it as an OpenGL ES framework. As you described it in the comment above, you'll need to consider two basic user actions, and one action I'll add as helpful:
Swipe across the screen in some direction: change in X, change in Y.
Touch at some (x,y) point on screen with the car in some orientation.
(Optional) Zoom in/out to make it easier for a user to select small features such as side mirrors.
Depending on what OpenGL ES details Rajawali exposes, you'll need to do one or both of the following:
Learn about the four matrices that determine how a 3D scene is rendered on a 2D screen.
Find the Rajawali functions with names such as "lookAt" or "setViewpoint," and learn how to pass screen gesture info to these functions.
You can read about the four OpenGL matrices at length elsewhere. Even if Rajawali simplifies the coding, you should learn a little about those matrices. Although your first inclination may be to change the "model" matrix that affects the object's position and orientation, it's more likely that you'll be manipulating the "view" matrix that determines the point and direction in space from which the user sees the car. That is, the car will actually remain centered at (0,0,0), and the user's swipes, touches, and pinches will change the viewpoint.
Constraining movements so that the vehicle is always centered is nice both because your code can be a little simpler, and also because the user can't "lose" the car by sliding the viewpoint too far to one side.
The simplest change of viewpoint is a zoom, which in most implementations means simply changing the Z translation of the viewpoint matrix. Rajawali may make this simpler by providing zoomIn() and zoomOut() functions. Otherwise you'll need to do this:
In the callback or "event handler" provided by Rajawali/Android for a pinch, get the pinch-in or pinch-out value.
Call the Rajawali zoomIn() or zoomOut() function, if it exists. You will likely need to scale the value so that the amount of pinch matches expectations for zooming in and out of a car model.
Alternatively, set the Z translation component of the view matrix directly, as in the sketch below.
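As a rough sketch (not Rajawali's actual API), here's how the pinch handling might look with Android's ScaleGestureDetector; how you apply cameraZ afterwards depends on what the framework exposes:

    import android.content.Context;
    import android.view.MotionEvent;
    import android.view.ScaleGestureDetector;

    public class PinchZoomHandler {
        private float cameraZ = -10f; // assumed starting viewpoint distance

        private final ScaleGestureDetector detector;

        public PinchZoomHandler(Context context) {
            detector = new ScaleGestureDetector(context,
                    new ScaleGestureDetector.SimpleOnScaleGestureListener() {
                @Override
                public boolean onScale(ScaleGestureDetector d) {
                    // getScaleFactor() > 1 means pinch-out (zoom in). Scale
                    // it so a full pinch feels right for a car-sized model.
                    cameraZ /= d.getScaleFactor();
                    return true;
                }
            });
        }

        // Call this from your View's onTouchEvent.
        public boolean onTouchEvent(MotionEvent event) {
            return detector.onTouchEvent(event);
        }

        public float getCameraZ() {
            return cameraZ;
        }
    }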
Converting an (x,y) 2D screen touch point to a ray cast into 3D space can be tricky if Rajawali doesn't provide an appropriate function, called something like "screenToWorld", that accepts a point in 2D screen space and returns a 3D point or 3D ray in world space. Spend time googling for "ray casting" for Rajawali. Here's a brief overview of what the code will need to do:
Convert a 2D touch point into a 3D ray pointed into the screen.
Check for the intersection(s) of the 3D ray and various subobjects.
(Optional) Change the color or otherwise highlight the selected object.
OpenGL does not provide a ray casting function, and I don't recommend implementing it on your own unless you have no choice. Various frameworks that wrap around or supplement OpenGL may provide this function. OpenGL coders will fault me for this description, but from memory here's how to convert a 2D touchpoint into a 3D ray pointing into the screen:
Get the (x,y) 2D screen touch point from a "touch" or "click" callback or event handler in Rajawali or Android.
Convert the 2D touch point to a 3D point. If I remember correctly, this means setting Z to some value such as -1, 0, or 1. This is the base point of the ray.
Define a second 3D point with a different Z value. This is a far point of the ray.
Use the screen, projection, and view matrices to transform the 3D points into "world" space.
Given the 3D world coordinates for your base point and far point, use ray-object intersection to determine what object is intersected.
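From memory again, a sketch of those steps using android.opengl.GLU.gluUnProject; it assumes you've kept copies of the model-view matrix, projection matrix, and viewport, which Rajawali may expose under different names:

    import android.opengl.GLU;

    public class TouchRay {
        public final float[] near = new float[3]; // base point of the ray
        public final float[] far = new float[3];  // far point of the ray

        public TouchRay(float touchX, float touchY,
                        float[] modelView, float[] projection, int[] viewport) {
            // Screen Y grows downward; OpenGL window Y grows upward.
            float winY = viewport[3] - touchY;

            unproject(touchX, winY, 0f, modelView, projection, viewport, near);
            unproject(touchX, winY, 1f, modelView, projection, viewport, far);
        }

        private static void unproject(float winX, float winY, float winZ,
                                      float[] mv, float[] proj, int[] vp,
                                      float[] out) {
            float[] tmp = new float[4];
            GLU.gluUnProject(winX, winY, winZ, mv, 0, proj, 0, vp, 0, tmp, 0);
            // Android's gluUnProject leaves the result in homogeneous
            // coordinates on some releases; dividing by w is safe either way.
            out[0] = tmp[0] / tmp[3];
            out[1] = tmp[1] / tmp[3];
            out[2] = tmp[2] / tmp[3];
        }
    }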
Again, Rajawali may provide some function that determines which object(s) are intersected by the ray. If multiple objects are returned, then pick the closest object. Since your vehicle is already subdivided into multiple subobjects this shouldn't be too hard. Implementing pinch-to-zoom can make it easier for a user to select a small object.
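If no such function exists, a common fallback (not specific to Rajawali) is a bounding-sphere test per subobject, keeping the closest hit; here's the standard quadratic test:

    // Standard ray-sphere test: returns true if the ray's infinite line
    // crosses the sphere. For strict picking you'd also check that the
    // nearest root t is >= 0 (in front of the viewer).
    public static boolean raySphereIntersects(float[] origin, float[] dir,
                                              float[] center, float radius) {
        float ox = origin[0] - center[0];
        float oy = origin[1] - center[1];
        float oz = origin[2] - center[2];
        float a = dir[0] * dir[0] + dir[1] * dir[1] + dir[2] * dir[2];
        float b = 2f * (ox * dir[0] + oy * dir[1] + oz * dir[2]);
        float c = ox * ox + oy * oy + oz * oz - radius * radius;
        return b * b - 4f * a * c >= 0f;
    }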
Swiping is analogous to a mouse move for OpenGL, and many starter projects for OpenGL describe how to convert a mouse move to a rotation. Assuming for the moment that the model rotates only about the vertical axis from the ground through the roof, then you simply need to change left/right swipes to positive/negative rotations about what in OpenGL is typically the Y-axis.
From Android/Rajawali, handle the "swipe" event handler or callback. This is analogous to a "mouseMove" function.
Translate the left/right swipe into a negative/positive value.
Call the rotateAboutY() function, if available, OR apply a rotation to the viewpoint matrix (which I won't describe here).
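A minimal sketch of that mapping, inside your View subclass; rotateCameraAroundY() is a hypothetical helper standing in for whatever Rajawali call or view-matrix update you end up using:

    private float lastX;

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getActionMasked()) {
            case MotionEvent.ACTION_DOWN:
                lastX = event.getX();
                return true;
            case MotionEvent.ACTION_MOVE:
                float dx = event.getX() - lastX;
                lastX = event.getX();
                // Scale pixels to degrees; tune the factor to taste.
                rotateCameraAroundY(dx * 0.5f); // hypothetical helper
                return true;
        }
        return super.onTouchEvent(event);
    }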
Given all that, I would suggest the following approach:
See if Rajawali provides convenience functions to convert screen coordinates to a world ray, to convert a screen swipe to a rotation, and to test a ray intersection with a series of objects.
Even if Rajawali provides these functions, read a little bit about the low-level OpenGL ES underneath, and the four matrices: screen, perspective, viewpoint, and model.
If Rajawali doesn't provide the convenience functions, look for a framework that does OR see if some other library that works with Rajawali can provide these convenience functions.
If you can't change frameworks or find a framework that hides the messy details, plan to spend a week or more studying OpenGL closely. You probably don't need to know about shaders, textures, etc., but you will need to understand the OpenGL 3D space, the four matrices, and so on.
I'm making my first 2D side-scroller game using a SurfaceView and a Canvas to draw things on (a lot of primitives put into different Path objects).
My game loop uses fixed timesteps with linear interpolation. I don't create any objects during the game. I've been improving my code for three weeks now, but my animation is still not smooth all the time. It's OK, but every few seconds there are a lot of little hiccups for about 1 or 2 seconds.
What I've noticed is that when I move my player (which means touching the screen), the little hiccups disappear for as long as I'm touching the screen and moving my player.
This means that as long as onTouchEvent of the SurfaceView is being called, the animation is smooth.
I don't understand this, and I want a smooth animation. Can somebody help me?
This sounds like a known issue on certain devices. See e.g.:
Android SurfaceView slows if no touch events occur
Animation glitches while rendering on SurfaceView
Android thread performance/priority on Galaxy Note 2 when screen is touched/released
The problem is that the system is aggressively reducing clock speeds to save power when it doesn't detect interaction with the user. (Qualcomm in particular seems fond of this.) The workaround is to drop frames when necessary. See this article on game loops, and a Choreographer-based trick demonstrated in Grafika's "record GL app" activity (in doFrame()).
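The gist of the Choreographer trick, sketched from memory (Grafika's doFrame() is the authoritative version): use the vsync timestamp to notice you're running late and skip the draw rather than letting frames pile up.

    import android.view.Choreographer;

    public class GameLoop implements Choreographer.FrameCallback {
        private static final long FRAME_BUDGET_NS = 16_666_667L; // ~60 fps
        private long expectedNanos;

        public void start() {
            expectedNanos = System.nanoTime() + FRAME_BUDGET_NS;
            Choreographer.getInstance().postFrameCallback(this);
        }

        @Override
        public void doFrame(long frameTimeNanos) {
            Choreographer.getInstance().postFrameCallback(this);
            boolean late = frameTimeNanos > expectedNanos + FRAME_BUDGET_NS;
            expectedNanos = frameTimeNanos + FRAME_BUDGET_NS;
            update(); // always advance the simulation
            if (!late) {
                render(); // skip the draw when we've missed a vsync
            }
        }

        private void update() { /* fixed-timestep game logic */ }
        private void render() { /* draw to the SurfaceView */ }
    }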
Is it possible to detect every pixel being touched? More specifically, when the user touches the screen, is it possible to track all the x-y coordinates of the cluster of points touched by the user? How can I tell the difference between when users are drawing with their thumb and when they are drawing with the tip of a finger? I would like to reflect the brush difference depending on how users touch the screen, and would also like to track x-y coordinates of all the pixels being touched over time. Thanks so much in advance for any help.
This would be very tricky primarily because every android phone is going to behave differently. There are some touch screen devices that are very, very sensitive and some that are basically "dull" by comparison.
It also sounds like you want to track pressure - how hard the user is pushing on the screen - which is actually supported on Android devices.
I think some of your answer may be found by monitoring all of the touch events. In practice, most applications ignore a great number of events or perform some kind of "smoothing" of the events, since there is literally a deluge of touch events when the user is manipulating the screen. Doing this may negatively impact your application's performance, though.
I would recommend that you look into pressure sensitivity and calculate a circular region around the primary touch point based on pressure, then build your brush around that.
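For example (pressure and size are normalized 0..1 and vary wildly between devices, so the constants here are placeholders you'd calibrate):

    import android.view.MotionEvent;

    // Derive a brush radius from pressure and contact size. A big, soft
    // contact (thumb) yields a larger radius than a fingertip pressing
    // hard on a small area.
    public float brushRadiusPx(MotionEvent event) {
        float pressure = event.getPressure(); // 0..1, device-dependent
        float size = event.getSize();         // normalized contact area
        float base = 12f;                     // minimum radius in pixels
        return base + 40f * size + 20f * pressure;
    }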
Another idea would be to incorporate more of a gesture approach to what you are trying to do - for example, visualize touching the screen with the tip of two fingers together (index and middle) and rolling the middle finger around the index finger or simply moving the middle finger up and down in relation to the index finger. Both fingers would be moved together for painting. This could be used to manipulate drawing angle on the fly or perhaps even toggle between a set of pre-selected brushes or could change brush size on the fly as you are painting.
Some of the above ideas I would love to see implemented - let me know when you have your app ready.
Good luck!
Rodney
If you have a listener on your image it will respond that there was a touch within that bounding box, basically.
So, to get what you want, you could (though I would never do this) create a box around every pixel, or small group of pixels, and listen for a touch on each.
Wherever you get a touch, an event will fire, and you can react accordingly.
I can't think of any other solution that will give you each pixel that a person touched, at one time.
You may want to read up on multitouch though, as there are some suggestions in here that may help you:
http://android-developers.blogspot.com/2010/06/making-sense-of-multitouch.html
If you're looking for a way to get your content view as a View after Activity#setContentView(int), then you can set an id on the outer-most element of your layout:
android:id="@+id/entire_view" and reference it in your onCreate() method after setContentView:
View view = findViewById(R.id.entire_view);
view.setOnTouchListener( ... );
I am developing a 2D, underwater, action-RPG for Android, using Box2D as the physics engine, mainly for collision detection, collision response and movement of in-game characters within an environment comprised of walls, rocks, and other creatures.
I have tried two different approaches for implementing character animations using Box2D, and have found issues with both. As I'm new to Box2D and physics engines, I would appreciate a recommendation on how these things should best be done.
An example of an animation I am trying to do is as follows:
A fish wants to attack another fish, so does the following:
1) Move towards target at speed
2) Take a bite out of target creature
3) Turn and flee, back to where the attack began
4) Turn back to face the target, ready for another attack
The two approaches I've tried are:
A) Apply a force to the attacker (using body.applyForce()) to move it towards the target, then another force to move it back again after the collision.
Problems:
* Frequently the attacker hits the target, bounces off, and goes hurtling back at great speed, ricocheting off walls everywhere. The speed is fairly random, depending on where it impacts the target, the mass of the target, etc. It breaks the animation and looks terrible.
* It's very hard to figure out what forces should be applied to the attacker and when, to simulate a particular animation in a physics world so it looks realistic
B) Directly set the position of the attacker (using body.setTransform()) to move the attacker to the correct position as it moves forward each step, then moves back again.
Problems:
* Directly setting the position allows the attacker to ignore collisions with walls and other creatures, so getting stuck in a wall is common
* If the player is attacking, I update the world origin as the player moves, to keep the player mid-screen. This works well, except when I start an animation, as I don't want the screen to follow the animation, but only the movement component of the existing velocity, which I don't know, as I'm overriding the Box2D forces/velocities when I set the position. It's possible to do this I'm sure, but difficult - maybe I'm missing something obvious.
Should I be monitoring the collisions? Overriding the collision response?? Something else?
So, how would you recommend I approach this problem?
I've only worked with Farseer, but Farseer is a pretty direct port of Box2D, so I hope this answer is still helpful.
Besides applying force and teleporting, you can also set the linear velocity of a body. This way you can have the fish move towards the player without applying force. You should capture the collision events from the fish body and check in each event whether the fish has hit the player, then disable the collision response in the contact with the player so that it doesn't bounce off. Now set the fish body to ignore any further collisions with the player body and use a fixed joint to stick the fish to the player. You can now play your bite animation.
After the bite animation you want to disengage the fish. Start your flee animation, remove the joint, and teleport the fish to the edge of the player body (so that it's not colliding with it). After that, re-enable collisions between the fish and the player body and send the fish away from the player (either by setting the linear velocity again or, for a nice bounce effect, via a force).
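Sketched in Java against libGDX-style Box2D bindings (the C++ API is analogous); the speed constant and the bite/flee hooks are placeholders:

    import com.badlogic.gdx.math.Vector2;
    import com.badlogic.gdx.physics.box2d.*;

    public class AttackController implements ContactListener {
        private static final float ATTACK_SPEED = 6f; // assumed tuning value
        private final Body fish;
        private final Body player;

        public AttackController(World world, Body fish, Body player) {
            this.fish = fish;
            this.player = player;
            world.setContactListener(this);
        }

        // Drive the attack with a velocity so the speed is predictable.
        public void startAttack() {
            Vector2 toTarget = player.getPosition().cpy()
                    .sub(fish.getPosition()).nor();
            fish.setLinearVelocity(toTarget.scl(ATTACK_SPEED));
        }

        @Override
        public void beginContact(Contact contact) {
            if (involvesFishAndPlayer(contact)) {
                // Trigger the bite animation here.
            }
        }

        @Override
        public void preSolve(Contact contact, Manifold oldManifold) {
            // No collision response between fish and player: no bounce.
            if (involvesFishAndPlayer(contact)) {
                contact.setEnabled(false);
            }
        }

        @Override public void endContact(Contact contact) {}
        @Override public void postSolve(Contact contact, ContactImpulse impulse) {}

        private boolean involvesFishAndPlayer(Contact contact) {
            Body a = contact.getFixtureA().getBody();
            Body b = contact.getFixtureB().getBody();
            return (a == fish && b == player) || (a == player && b == fish);
        }
    }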
I am recently getting into Android programming and want to make a simple game using 2D canvas drawing. I have checked out the Lunar Lander example and read up on some gestures, but it looks like I can only detect if a gesture occurred. I am looking to do a little more complicated detection on a swipe:
I want to make a simple game where a user can drag their finger through one or more objects on the screen and I want to be able to detect the objects that they went over in their path. They may start going vertically, then horizontally, then vertically again, such that at the end of a contiguous swipe they have selected 4 elements.
1) Are there APIs that expose the functionality of getting the full path of a swipe like this?
2) Since I am drawing on a Canvas, I don't think I will be able to access things like "onMouseOver" for the items in my game. I will have to instead detect if the swipe was within the bounding box of my sprites. Am I thinking about this correctly?
If I have overlooked an obvious post, I apologize. Thank you in advance!
I decided to implement the
public boolean onTouchEvent(MotionEvent event)
handler in my code for my game. Instead of getting the full path, I check which tile the user is over each time onTouchEvent fires. I previously thought this event fired only once on the first touch, but it keeps firing as long as your finger is moving along the surface of the screen, even if you haven't lifted it and touched again.
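One detail worth knowing: ACTION_MOVE events are batched, so the intermediate points of a fast swipe are still recoverable from the event's historical samples. A sketch, with hitTest() standing in for whatever bounding-box check you use against your sprites:

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getActionMasked() == MotionEvent.ACTION_MOVE) {
            // Historical points arrive oldest-first, then the current one.
            for (int i = 0; i < event.getHistorySize(); i++) {
                hitTest(event.getHistoricalX(i), event.getHistoricalY(i));
            }
            hitTest(event.getX(), event.getY());
        }
        return true;
    }

    private void hitTest(float x, float y) { /* bounding-box check */ }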