Take, as an example, a piano image. On every key press on its keyboard I want to perform a different event, but I can't work out where to start.
How can I do this in Android?
Take the image below:
As you can see in the image, I have a piano image with red and blue lines drawn in certain shapes. I want to perform a different event for each shape on this image.
How can I do this in Android programming?
This seems like a basic Collision Detection problem.
Basically, what you want to do is listen for screen touches and read the X & Y of the touch location. (You can of course use multi-touch; just remember to do this for every pointer.)
When a touch occurs, you run a containsRectanglePoint check for each key on the image.
So basically you split the image into a number of rectangles, like so.
(Image: the keys divided into numbered rectangles, 1-7; source: gyazo.com)
Then you check whether any of the rectangles contains the point.
If the touch X & Y is inside rectangle 1 or 2, then perform the event for that key.
If the touch X & Y is inside rectangle 3, 4 or 5, then perform the event for that key.
If the touch X & Y is inside rectangle 6, then perform the event for that key.
If the touch X & Y is inside rectangle 7, then perform the event for that key.
You will of course do that for all the keys.
So whenever a touch happens, you run through all of those checks.
Simple Rectangle vs Point Collision Detection
The following code checks for collision between a rectangle and a point. If the point is within the rectangle's bounds the method returns true; if not, it returns false.
public static boolean containsRectanglePoint(double x, double y, double w, double h, double px, double py)
{
    if (px < x) { return false; }
    if (py < y) { return false; }
    if (px > (x + w)) { return false; }
    if (py > (y + h)) { return false; }
    return true;
}
x = Rectangle X (or AABB Minimum X)
y = Rectangle Y (or AABB Minimum Y)
w = Rectangle Width (or AABB Maximum X - AABB Minimum X)
h = Rectangle Height (or AABB Maximum Y - AABB Minimum Y)
px = Point X
py = Point Y
In your case px & py is the location of the touch.
You could also use Java's standard Rectangle2D class to both store the rectangles and test for collisions, but that requires creating a lot of instances of the class. In terms of memory it is much cheaper to just store the coordinates and use the function provided above.
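For completeness, here is a minimal sketch of how the check could be wired into a touch listener. The key rectangles and the playKey handler are hypothetical placeholders; you would fill in coordinates that match your own image, and a key made of several rectangles would simply map several entries to the same key index.

// Minimal sketch (assumed coordinates and handler names).
// keyRects[i] holds {x, y, w, h} for key i, measured on the displayed image.
private final double[][] keyRects = {
        {0, 0, 40, 200},    // key 0 (example values)
        {40, 0, 40, 200},   // key 1
        // ...
};

@Override
public boolean onTouchEvent(MotionEvent event) {
    if (event.getActionMasked() == MotionEvent.ACTION_DOWN) {
        float px = event.getX();
        float py = event.getY();
        for (int i = 0; i < keyRects.length; i++) {
            double[] r = keyRects[i];
            if (containsRectanglePoint(r[0], r[1], r[2], r[3], px, py)) {
                playKey(i); // hypothetical: trigger the event for this key
                break;
            }
        }
    }
    return true;
}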
Related
I want to pinch zoom around a specific coordinate on a tiled 15f x 15f 3D board. I.e. I don't want to zoom around the origin, so I need to pan the board accordingly.
I am using a PerspectiveCamera (near=0.1f, far=100f). For a perfect fit of the screen the camera is located at approximately z=13.4 before zooming.
Now what (I think) I want to do is to:
Unproject the screen coordinates (GestureDetector.pinch method) done once for each pinch zoom:
float icx = (initialPointer1.x + initialPointer2.x) / 2;
float icy = (initialPointer1.y + initialPointer2.y) / 2;
pointToMaintain = new Vector3(icx, icy, 0f);
mCamera.unproject(pointToMaintain);
Now for each zoom cycle (as I adjust the mCamera.z accordingly and do mCamera.update()) I project the point back to screen coordinates:
Vector3 pointNewPos = new Vector3(pointToMaintain);
mCamera.project(pointNewPos);
Then calculate the delta and pan accordingly:
int dX = (int) (pointNewPos.x - icx);
int dY = (int) (pointNewPos.y - icy);
pan(...); /* I.e. mCamera.translate(...) */
My problem is that the mCamera.z is initially above pointToMaintain.z and then goes below as the user moves the fingers:
     cam.z   ptm.z   dX     dY
(0)  13.40
(1)  13.32   13.30   12    134
(2)  13.24   13.30   12   -188
...
(0) is the original value of mCamera.z before zooming starts. (1) is not valid(?), but (2) should be OK.
My questions are:
(1) How can I get a "valid" pointToMaintain when unprojecting the screen coordinates on the camera, i.e. a point whose z is not less than cam.z? (The reason I get 13.30 for the point is, I guess, because near=0.1f, but as seen above this results in weird screen coordinates.)
(2) Is this a good strategy for moving the tile board closer to the coordinates the user pinch zoomed on?
To maintain the focus point, I use this code:
Note: this code relies on overloaded operators; you need to replace the vector operators with their equivalent methods (add, subtract, etc.).
void zoomAt(float changeAmount, Vector2D focus) {
    float newZoom = thisZoom + changeAmount;
    offset = focus - ((focus - offset) * newZoom / thisZoom);
    thisZoom = newZoom;
}
Where:
focus = current center point to maintain
offset = distance from (0,0)
thisZoom = current zoom amount (starts at 1)
changeAmount = value by which to increase or decrease the zoom
It took me 4 attempts over 3 years to get this right, and it turned out to be pretty easy once you draw it out; it's just two triangles.
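As a rough illustration, here is the same formula expanded into plain Java without operator overloading. The Vector2D type with public x/y fields is an assumption; adapt it to whatever vector class you actually use.

// A minimal sketch, assuming a simple Vector2D with public x/y fields.
void zoomAt(float changeAmount, Vector2D focus) {
    float newZoom = thisZoom + changeAmount;
    // offset = focus - ((focus - offset) * newZoom / thisZoom)
    offset.x = focus.x - (focus.x - offset.x) * newZoom / thisZoom;
    offset.y = focus.y - (focus.y - offset.y) * newZoom / thisZoom;
    thisZoom = newZoom;
}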
Is there a way to check if I touched an object on the screen? As I understand it, the HitResult class allows me to check if I touched a recognized and mapped surface, but I want to check whether I touched the object that is placed on that surface.
ARCore doesn't really have a concept of an object, so we can't directly provide that. I suggest looking at ray-sphere tests for a starting point.
However, I can help with getting the ray itself (to be added to HelloArActivity):
/**
 * Returns a world coordinate frame ray for a screen point. The ray is
 * defined using a 6-element float array containing the head location
 * followed by a normalized direction vector.
 */
float[] screenPointToWorldRay(float xPx, float yPx, Frame frame) {
    float[] points = new float[12];  // {clip query, camera query, camera origin}
    // Set up the clip-space coordinates of our query point
    // +x is right:
    points[0] = 2.0f * xPx / mSurfaceView.getMeasuredWidth() - 1.0f;
    // +y is up (android UI Y is down):
    points[1] = 1.0f - 2.0f * yPx / mSurfaceView.getMeasuredHeight();
    points[2] = 1.0f; // +z is forwards (remember clip, not camera)
    points[3] = 1.0f; // w (homogenous coordinates)

    float[] matrices = new float[32];  // {proj, inverse proj}
    // If you'll be calling this several times per frame factor out
    // the next two lines to run when Frame.isDisplayRotationChanged().
    mSession.getProjectionMatrix(matrices, 0, 1.0f, 100.0f);
    Matrix.invertM(matrices, 16, matrices, 0);

    // Transform clip-space point to camera-space.
    Matrix.multiplyMV(points, 4, matrices, 16, points, 0);
    // points[4,5,6] is now a camera-space vector. Transform to world space to get a point
    // along the ray.
    float[] out = new float[6];
    frame.getPose().transformPoint(points, 4, out, 3);
    // use points[8,9,10] as a zero vector to get the ray head position in world space.
    frame.getPose().transformPoint(points, 8, out, 0);

    // normalize the direction vector:
    float dx = out[3] - out[0];
    float dy = out[4] - out[1];
    float dz = out[5] - out[2];
    float scale = 1.0f / (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
    out[3] = dx * scale;
    out[4] = dy * scale;
    out[5] = dz * scale;
    return out;
}
If you're calling this several times per frame see the comment about the getProjectionMatrix and invertM calls.
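To turn that ray into an actual hit test, one option is a simple ray-sphere check against a bounding sphere around each object. The following is only a minimal sketch; the sphere center and radius are assumptions you would take from your own object placement (for example the anchor's pose and the model's approximate size), and the ray origin is assumed to lie outside the sphere.

// Minimal sketch: test the 6-element ray {ox, oy, oz, dx, dy, dz}
// returned by screenPointToWorldRay against a bounding sphere.
// Returns the distance along the ray to the hit, or -1 if there is no hit.
static float raySphereHit(float[] ray, float cx, float cy, float cz, float radius) {
    // Vector from the ray origin to the sphere center.
    float lx = cx - ray[0];
    float ly = cy - ray[1];
    float lz = cz - ray[2];
    // Projection of that vector onto the (normalized) ray direction.
    float t = lx * ray[3] + ly * ray[4] + lz * ray[5];
    if (t < 0) {
        return -1f; // sphere is behind the ray origin
    }
    // Squared distance from the sphere center to the ray.
    float d2 = lx * lx + ly * ly + lz * lz - t * t;
    float r2 = radius * radius;
    if (d2 > r2) {
        return -1f; // ray misses the sphere
    }
    return t - (float) Math.sqrt(r2 - d2); // distance to the entry point
}

You would call this for every object's bounding sphere and keep the hit with the smallest positive distance.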
Apart from Mouse Picking with Ray Casting, cf. Ian's answer, the other commonly used technique is a picking buffer, explained in detail (with C++ code) here
The trick behind 3D picking is very simple. We will attach a running index to each triangle and have the FS output the index of the triangle that the pixel belongs to. The end result is that we get a "color" buffer that doesn't really contain colors. Instead, for each pixel which is covered by some primitive we get the index of this primitive. When the mouse is clicked on the window we will read back that index (according to the location of the mouse) and render the selected triangle red. By combining a depth buffer in the process we guarantee that when several primitives are overlapping the same pixel we get the index of the top-most primitive (closest to the camera).
So in a nutshell:
Every object's draw method needs a running index and a boolean flag for whether this pass renders into the picking buffer or not.
During the picking pass, the render method converts the index into a grayscale color and the scene is rendered with it.
After the whole rendering is done, retrieve the pixel color at the touch position with GL11.glReadPixels(x, y, ...) (passing the x and y of the pixel you want the color of). Then translate the color back to an index and the index back to an object. Voilà, you have your clicked object.
To be fair, for a mobile use case you should probably read a 10x10 rectangle, iterate through it and pick the first non-background color you find, because touches are never that precise.
This approach works independently of the complexity of your objects.
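For illustration, here is a hedged sketch of the index/color round trip with a single-pixel read (GLES20 is assumed here; the answer above describes a grayscale encoding, while this variation packs the index into RGB for a larger range):

import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Minimal sketch: encode an object index as an RGB color for the picking pass,
// read the pixel under the touch, and decode it back to an index.
static float[] indexToColor(int index) {
    int i = index + 1; // reserve 0 for the background
    return new float[] {
            ((i >> 16) & 0xFF) / 255f,
            ((i >> 8) & 0xFF) / 255f,
            (i & 0xFF) / 255f,
            1f
    };
}

static int readIndexAt(int x, int y, int surfaceHeight) {
    ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
    // glReadPixels uses a bottom-left origin, so flip the Android touch Y.
    GLES20.glReadPixels(x, surfaceHeight - y, 1, 1,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
    int r = pixel.get(0) & 0xFF;
    int g = pixel.get(1) & 0xFF;
    int b = pixel.get(2) & 0xFF;
    return ((r << 16) | (g << 8) | b) - 1; // -1 means background / no object hit
}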
I'm making a game with an accelerometer feature, so that each time I tilt my device to the left, the ship banks left, and vice versa.
The problem is that the ship keeps moving left by itself.
Here's my code:
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        float x = event.values[0];
        deltaX = xBefore - x;
        xBefore = x;
        if (deltaX > 0) { // move right
            SFEngine.playerFlightAction = SFEngine.PLAYER_LEFT_BANK_1;
        } else { // move left
            SFEngine.playerFlightAction = SFEngine.PLAYER_RIGHT_BANK_1;
        }
    }
}
You should not use any delta. Just check if x > 0, then move it to the left; else, to the right.
deltaX is not calculated well in your code. It does not consider that xBefore can be less than zero while the current x is greater than zero (or the other way round). The delta should be calculated using the absolute value.
However, I suppose you really don't need any delta here. Just use x.
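A minimal sketch of that suggestion, reusing the SFEngine constants from the question (you may need to swap the two constants depending on your device orientation):

public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        float x = event.values[0];
        // Bank on the sign of the raw reading instead of a delta.
        if (x > 0) {
            SFEngine.playerFlightAction = SFEngine.PLAYER_LEFT_BANK_1;
        } else {
            SFEngine.playerFlightAction = SFEngine.PLAYER_RIGHT_BANK_1;
        }
    }
}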
So I'm just starting to learn how to create live wallpapers in Eclipse, and I'm having trouble getting a simple line to move randomly across the screen after a random amount of time, sort of like a shooting star. I think my stop and start values are wrong as well; I was trying to set a length limit for the line.
I'm using the CubeLiveWallpaper as a template.
/*
 * Draw a line
 */
void drawCube(Canvas c) {
    c.save();
    c.drawColor(0xff000000);
    drawLine(c);
    c.restore();
}

/*
 * Line path
 */
void drawLine(Canvas c) {
    // Move line across screen randomly
    float startX = 0;
    float startY = 0;
    float stopX = 100;
    float stopY = 100;
    c.drawLine(startX, startY, stopX, stopY, mPaint);
}
This is a pretty open-ended question. I'll try to give you some pointers. :-)
First of all, with all due respect to our good buddies at Google, the Cube example does not always present "best practice." Most notably, you should "never" use hard-coded constants in your wallpaper...always use a proportion of your screen size. In most cases, it's "good enough" to save the width and height variables from onSurfaceChanged() into class variables. My point is, instead of "100," you should be using things like "mScreenWidth / 4" to indicate one quarter of the width of your device (be it teeny tiny phone or ginormous tablet).
To get random numbers, you can use http://developer.android.com/reference/java/util/Random.html
As for the animation itself, well, you can randomize the rate by randomizing the delay you use to reschedule your runnable in postDelayed().
By now, you're probably wondering about the "tricky" part...drawing the line itself. :-) I suggest starting with something very simple, and adding complexity as you eyeball things. Let's say, fr'instance you generate random start and finish points, so that your final stroke will be
c.drawLine(startX, startY, stopX, stopY, mPaint);
Presumably, you will want to draw a straight line, which means maintaining a constant slope. You could set up a floating point "percentage" variable, initialized to zero, and each time through the runnable, increment it by a random amount, so that at each pass it indicates the "percentage" of the line you wish to draw. So each call in your runnable would look like
c.drawLine(startX, startY, startX + percentage * deltaX, startY + percentage * deltaX * slope, mPaint);
(where deltaX = stopX - startX)
Obviously, you want to stop when you hit 100 percent.
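Putting those pieces together, a hedged sketch of the runnable might look like the following. The field names (mHandler, mScreenWidth, mScreenHeight) are assumptions layered on the Cube example's structure, not code from it.

// Minimal sketch; assumes mHandler, mScreenWidth, mScreenHeight and mPaint
// are saved as class fields, and java.util.Random is imported.
private final Random mRandom = new Random();
private float mStartX, mStartY, mStopX, mStopY, mPercentage = 1f;

private final Runnable mDrawStar = new Runnable() {
    @Override
    public void run() {
        if (mPercentage >= 1f) {
            // Line finished: pick new random start/finish points and restart.
            mStartX = mRandom.nextFloat() * mScreenWidth;
            mStartY = mRandom.nextFloat() * mScreenHeight;
            mStopX = mRandom.nextFloat() * mScreenWidth;
            mStopY = mRandom.nextFloat() * mScreenHeight;
            mPercentage = 0f;
        } else {
            // Grow the line by a small random amount each pass.
            mPercentage = Math.min(1f, mPercentage + mRandom.nextFloat() * 0.1f);
        }
        drawFrame(); // the Cube example's draw entry point
        // A random delay gives the "after a random amount of time" feel.
        mHandler.postDelayed(this, 20 + mRandom.nextInt(200));
    }
};

// Inside drawLine(Canvas c):
//   float dX = mStopX - mStartX;
//   float dY = mStopY - mStartY;
//   c.drawLine(mStartX, mStartY, mStartX + mPercentage * dX, mStartY + mPercentage * dY, mPaint);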
This is really just a start. You can get as heavy-duty with your animation as you wish (easing, etc.), for instance using a library like this one: http://code.google.com/p/java-universal-tween-engine/
Another option, depending on the effect you're trying to achieve, would be to work with a game engine, like AndEngine. Again, pretty heavy duty. :-)
http://code.google.com/p/andenginelivewallpaperextensionexample/source/browse/
Good luck!
I am developing an application which uses OpenGL for rendering images.
Now I want to detect a touch event on an OpenGL sphere object which I have drawn.
I draw 4 objects on the screen; how do I find out which object has been touched? I have used the onTouchEvent() method, but it only gives me x & y coordinates, while my objects are drawn in 3D.
Please help, since I am new to OpenGL.
Best Regards,
~Anup
At Google I/O there was a session on how OpenGL was used for Google Body on Android. Selecting body parts was done by rendering each of them with a solid color into a hidden buffer; then, based on the color found at the touch x,y, the corresponding object could be looked up. For performance reasons, only a small cropped area of 20x20 pixels around the touch point was rendered that way.
Both approaches (1. hidden color buffer and 2. intersection test) have their own merits.
1. Hidden color buffer: pixel read-out is a very slow operation, and certainly overkill when a simple ray-sphere intersection test would do.
2. Ray-sphere intersection test: this is not that difficult.
Here is a simplified version of an implementation from Ogre3D.
std::pair<bool, m_real> Ray::intersects(const Sphere& sphere) const
{
    const Ray& ray = *this;
    const vector3& raydir = ray.direction();
    // Adjust ray origin relative to sphere center
    const vector3& rayorig = ray.origin() - sphere.center;
    m_real radius = sphere.radius;

    // Mmm, quadratics
    // Build coeffs which can be used with std quadratic solver
    // ie t = (-b +/- sqrt(b*b - 4ac)) / 2a
    // (note: '%' is this codebase's overloaded dot-product operator)
    m_real a = raydir % raydir;
    m_real b = 2 * rayorig % raydir;
    m_real c = rayorig % rayorig - radius * radius;

    // Calc determinant
    m_real d = (b * b) - (4 * a * c);
    if (d < 0)
    {
        // No intersection
        return std::pair<bool, m_real>(false, 0);
    }
    else
    {
        // BTW, if d=0 there is one intersection, if d > 0 there are 2
        // But we only want the closest one, so that's ok, just use the
        // '-' version of the solver
        m_real t = (-b - sqrt(d)) / (2 * a);
        if (t < 0)
            t = (-b + sqrt(d)) / (2 * a);
        return std::pair<bool, m_real>(true, t);
    }
}
You probably also need to calculate the ray that corresponds to the cursor position. Again, you can refer to Ogre3D's source code: search for getCameraToViewportRay. Basically, you need the view and projection matrices to calculate a Ray (a 3D position and a 3D direction) from a 2D position.
In my project, the solution I chose was:
Unproject your 2D screen coordinates to a virtual 3D line going through your scene.
Detect possible intersections of that line and your scene objects.
This is quite a complex task.
I have only done this in Direct3D rather than OpenGL ES, but these are the steps:
Find your modelview and projection matrices. It seems that OpenGL ES has removed the ability to retrieve the matrices set by gluProject() etc. But you can use android.opengl.Matrix member functions to create these matrices instead, then set with glLoadMatrix().
Call gluUnproject() twice, once with winZ=0, then with winZ=1. Pass the matrices you calculated earlier.
This will output a 3d position from each call. This pair of positions define a ray in OpenGL "world space".
Perform a ray-sphere intersection test on each of your spheres in order. (Closest to the camera first, otherwise you may select a sphere that is hidden behind another.) If you detect an intersection, you've touched the sphere.
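As a rough sketch of the unprojection steps on Android: the modelMatrix and projMatrix fields here are assumptions (the 16-float matrices you built yourself and loaded with glLoadMatrixf), and note that android.opengl.GLU.gluUnProject writes a 4-component result that still has to be divided by w.

import android.opengl.GLU;

// Minimal sketch: build a world-space ray {ox, oy, oz, dx, dy, dz} from a touch point.
float[] rayFromTouch(float touchX, float touchY, int viewWidth, int viewHeight) {
    int[] viewport = {0, 0, viewWidth, viewHeight};
    float[] near = new float[4];
    float[] far = new float[4];
    // OpenGL window coordinates have their origin at the bottom-left,
    // Android touch coordinates at the top-left, so flip Y.
    float glY = viewHeight - touchY;
    GLU.gluUnProject(touchX, glY, 0f, modelMatrix, 0, projMatrix, 0, viewport, 0, near, 0);
    GLU.gluUnProject(touchX, glY, 1f, modelMatrix, 0, projMatrix, 0, viewport, 0, far, 0);
    // Android's gluUnProject leaves the result unhomogenized: divide by w.
    for (int i = 0; i < 3; i++) {
        near[i] /= near[3];
        far[i] /= far[3];
    }
    // Ray origin = near point, ray direction = far - near (normalize before the sphere test).
    return new float[] {near[0], near[1], near[2],
                        far[0] - near[0], far[1] - near[1], far[2] - near[2]};
}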
To find whether a touch point is inside a circle or not:
public boolean checkInsideCircle(float x, float y, float centerX, float centerY, float Radius)
{
    if (((x - centerX) * (x - centerX)) + ((y - centerY) * (y - centerY)) < (Radius * Radius))
        return true;
    else
        return false;
}
where:
1) centerX, centerY are the center point of the circle,
2) Radius is the radius of the circle,
3) x, y is the point of the touch.
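For example, a hedged usage inside a touch handler (the circle values below are placeholders):

@Override
public boolean onTouchEvent(MotionEvent event) {
    float centerX = 200f, centerY = 300f, radius = 80f; // assumed circle
    if (checkInsideCircle(event.getX(), event.getY(), centerX, centerY, radius)) {
        // The touch landed inside the circle: handle the hit here.
    }
    return true;
}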