I need to draw a gridded image using OpenGL. I've read that drawing an image as a grid lets you apply various effects to it, like the famous wave effect, or a ripple effect, from this link:
http://www.soulstorm-creations.com/index.php?option=com_content&view=article&id=111:opengl-making-a-2d-grid-image&catid=18:programming-articles&Itemid=39
I've also gone through lesson 6 android port from NEHE Tutorials:
http://insanitydesign.com/wp/projects/nehe-android-ports/
I can convert it from a cube to a rectangle, but I need help understanding:
1) Why are the vertex coordinates in terms of 0 and 1? Why haven't they used coordinates based on the image's width and height?
2) How can we divide the texture region into small grids, as explained in the tutorial above? If someone can guide me on 1), I think I can work out point 2).
Any help would be really appreciated.
The texture coordinates go from 0 to 1 so that the same vertex data can be used with many different textures, without worrying about the dimensions of the image.
That said, for pixel-perfect operations you often have to offset the texture coordinates by a fraction of the image's pixel width (say 0.5f / (float) image->width()) and height, in order to make sure OpenGL (or D3D) samples from the correct place.
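For instance, a half-texel offset in normalized texture space looks like this (a sketch; imageWidth and imageHeight are placeholder names):
float halfU = 0.5f / (float) imageWidth;   // half a texel, in 0..1 space
float halfV = 0.5f / (float) imageHeight;
// Sample at texel centers rather than texel edges:
float startU = 0.0f + halfU, endU = 1.0f - halfU;
float startV = 0.0f + halfV, endV = 1.0f - halfV;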
As for dividing the grid, that's straightforward linear interpolation. If you have a grid going from pixel coordinate 0 to 100 and you want 10 steps in your grid, you start at 0 and increment in steps of 10 pixels:
vertex_xi = start_x + ((end_x - start_x) / 10) * i;
vertex_yi = start_y + ((end_y - start_y) / 10) * i;
Similarly, for texture coordinates you'd do the same thing, only you usually name them like this:
vertex_ui = start_u + ((end_u - start_u) / 10) * i;
vertex_vi = start_v + ((end_v - start_v) / 10) * i;
where 'start_u' and 'start_v' are '0.0f +/- offset' and 'end_u' and 'end_v' are '1.0f +/- offset'. Put those in your vertex array and you should be good to go.
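Putting the two interpolations together, here is a minimal Java sketch (steps, startX/endX and the array names are placeholders, not from the tutorial) that fills position and texture-coordinate arrays for the whole grid:
int steps = 10;                        // 10 cells -> 11 vertices per axis
float[] pos = new float[(steps + 1) * (steps + 1) * 2];
float[] uv  = new float[(steps + 1) * (steps + 1) * 2];
int k = 0;
for (int j = 0; j <= steps; j++) {
    for (int i = 0; i <= steps; i++) {
        pos[k]     = startX + (endX - startX) * i / (float) steps; // vertex x
        pos[k + 1] = startY + (endY - startY) * j / (float) steps; // vertex y
        uv[k]      = i / (float) steps;                            // u in 0..1
        uv[k + 1]  = j / (float) steps;                            // v in 0..1
        k += 2;
    }
}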
HTH.
I need a basic idea of how I can warp an image where the user touches it. Image filters apply the warp to the whole image, but I want to warp a single point; for example, if I want to warp the eye of a person, I will touch that point. So I need a basic idea of how this works.
I have tried this one, but it also applies the filter to the whole image.
https://github.com/Jtfinlay/PhotoWarp
App:
https://play.google.com/store/apps/details?id=hu.tonuzaba.android&hl=en
A warp is not just at a "single point" but over some area that you deform in a smooth way.
To achieve this, you need a geometric transform of the coordinates that works in some neighborhood of the touched point. One way to do this is by applying a square grid on the image and moving the grid nodes around the touched points with some law of yours (for instance, apply a displacement vector to all nodes, with a decaying factor such that far away nodes don't move).
Then you need a resampling function that computes the new coordinates of every pixel and copies the color of the source pixel.
For good results, you must actually work in reverse: scan the destination image and for every pixel retrieve the source coordinates and source pixels. Apply bilinear or bicubic resampling to avoid aliasing.
For ease of implementation, the gridding idea should be adapted as well: rather than deforming the destination grid, keep it unchanged and apply the inverse deformation to the source grid.
Last thing: in the grid approach, see the displacements of the grid nodes as two scalar functions DX(i, j) and DY(i, j) that you can handle separately. From the knowledge of the displacements at the nodes, you can estimate the displacement of any pixel by interpolation (bicubic would be appropriate here).
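For illustration, here is a minimal reverse-mapping sketch in Java. Instead of the full grid machinery it uses a single radial displacement that decays away from the touch point; every name in it (warp, bilinear, dragX, radius, …) is a placeholder of mine, not part of the method described above:
import android.graphics.Bitmap;

public class WarpSketch {
    // Scan the destination and pull colors from the source (reverse mapping).
    // 'dst' must be a mutable Bitmap with the same size as 'src'.
    static void warp(Bitmap src, Bitmap dst, float touchX, float touchY,
                     float dragX, float dragY, float radius) {
        for (int y = 0; y < dst.getHeight(); y++) {
            for (int x = 0; x < dst.getWidth(); x++) {
                // Displacement decays smoothly, so far-away pixels barely move.
                float dx = x - touchX, dy = y - touchY;
                float falloff = (float) Math.exp(-(dx * dx + dy * dy) / (radius * radius));
                // Inverse deformation: step back along the drag vector.
                dst.setPixel(x, y, bilinear(src, x - dragX * falloff, y - dragY * falloff));
            }
        }
    }

    // Bilinear resampling to avoid aliasing; clamps to the image border.
    static int bilinear(Bitmap src, float x, float y) {
        x = Math.max(0f, Math.min(src.getWidth() - 1.001f, x));
        y = Math.max(0f, Math.min(src.getHeight() - 1.001f, y));
        int x0 = (int) x, y0 = (int) y;
        float fx = x - x0, fy = y - y0;
        int out = 0;
        for (int shift = 0; shift <= 24; shift += 8) { // blend each ARGB channel
            float top = ((src.getPixel(x0, y0) >> shift) & 0xFF) * (1 - fx)
                      + ((src.getPixel(x0 + 1, y0) >> shift) & 0xFF) * fx;
            float bot = ((src.getPixel(x0, y0 + 1) >> shift) & 0xFF) * (1 - fx)
                      + ((src.getPixel(x0 + 1, y0 + 1) >> shift) & 0xFF) * fx;
            out |= ((int) (top * (1 - fy) + bot * fy) & 0xFF) << shift;
        }
        return out;
    }
}
In real code you'd read the pixels into an int[] with getPixels() once, instead of calling getPixel() per sample, which is far too slow for full-size images.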
You can use the canvas to detect that portion and stop the action on that portion in your OnTouchListener.
Code sample:
// Load the tag bitmap that will be drawn over the image.
Bitmap pricetagBmp = BitmapFactory.decodeResource(getActivity().getResources(), R.drawable.ic_tag_circle_24dp);
// Center the bitmap inside the (left, top, right, bottom) bounds.
float imageStartX = (left + ((right - left) / 2)) - (pricetagBmp.getWidth() / 2);
float imageStartY = (top + ((bottom - top) / 2)) - (pricetagBmp.getHeight() / 2);
canvas.drawBitmap(pricetagBmp, imageStartX, imageStartY, circlePaint);
Then, in your OnTouchListener, if a touch at those points is detected you can simply perform no action.
Note: you can replace drawBitmap with drawRect or something else drawn with an invisible color.
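A hypothetical sketch of that check (the lambda, RectF, and the 'view' variable are my additions; imageStartX/Y and pricetagBmp are from the sample above):
view.setOnTouchListener((v, event) -> {
    RectF tagRect = new RectF(imageStartX, imageStartY,
            imageStartX + pricetagBmp.getWidth(), imageStartY + pricetagBmp.getHeight());
    // Consume touches that land on the tag so no action is performed.
    return tagRect.contains(event.getX(), event.getY());
});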
I'd like to project images on a wall using the camera. Essentially, the images must scale according to the distance between the camera and the wall.
Firstly, I made distance calculations using right-triangle trigonometry (visionHeight * Math.tan(a)). It's not 100% exact, but still close to the real values.
Secondly, knowing the distance, we can try to figure out the whole panorama height by using the isosceles-triangle trigonometry formula c = a * tan(A), where:
A = mCamera.getParameters().getVerticalViewAngle();
The results are about 30% greater than the actual object height, which is kind of weird.
double panoramaHeight = 2 * distance * Math.tan(Math.toRadians(mCamera.getParameters().getVerticalViewAngle() / 2));
I've also tried figuring out those angles using the same isosceles-triangle formula, but now knowing the distance and the height. I got angles of 28 and 48 degrees.
Does this mean that the Android camera doesn't render everything it shoots? And what other solutions can you suggest?
A web search shows that the values returned by getVerticalViewAngle() cannot be blindly trusted on all devices; also note that you should take into account the zoom level and aspect ratio. See Determine angle of view of smartphone camera.
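For example, digital zoom can be compensated for along these lines with the (now deprecated) android.hardware.Camera API; a sketch, assuming 'distance' is already known:
Camera.Parameters params = mCamera.getParameters();
double fovDeg = params.getVerticalViewAngle();
if (params.isZoomSupported()) {
    // Zoom ratios are percentages (100 = no zoom).
    double zoom = params.getZoomRatios().get(params.getZoom()) / 100.0;
    // The tangent of the half-angle shrinks by the zoom factor.
    fovDeg = Math.toDegrees(2 * Math.atan(Math.tan(Math.toRadians(fovDeg / 2)) / zoom));
}
double panoramaHeight = 2 * distance * Math.tan(Math.toRadians(fovDeg / 2));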
I need to draw circles in my Android application. It is actually a Pacman game, and I need to draw the tablets. Since there are many tablets on the field, I decided to draw each tablet with a single polygon.
Here is the illustration of my idea:
http://www.advancedigital.ru/ideal_circle_with_only_one_polygon.jpg
Vertex coords:
// (x, y)
0 : -R, R * (Math.sqrt(2) + 1)
1 : -R, -R
2 : R * (Math.sqrt(2) + 1), -R
Vertex coords are calculated relative to the circle center, to make it easy to place the circles later.
The problem is in the texture mapping; according to my calculations, the UVs should be like this:
0 : 0, -(Math.sqrt(2) + 0.5)
1 : 0, 1
2 : 1, (Math.sqrt(2) + 0.5)
But a negative V value causes the application to show only a black screen. That is why I think I'm missing something or going the wrong way…
My question is: is it possible to render the texture that way or not? If it isn't, what is the best way to draw small dots?
P.S: I'm working with OpenGL ES 2.0 on Android.
Seems to me that this guy is trying to do the same.
The GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T texture parameters are set to GL_REPEAT by default. Set them to GL_CLAMP_TO_EDGE instead to get the effect you're looking for (see the glTexParameter documentation)
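In OpenGL ES 2.0 on Android that looks something like this (a sketch; assumes the texture in question is currently bound):
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
Also worth knowing: ES 2.0 only supports GL_REPEAT on power-of-two textures, so a non-power-of-two texture left at the default wrap mode samples as black, which matches the symptom you describe.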
I'm new to GL and wanted to create a tiled map as a self-tutorial. I want to create a small (maybe 7 hexes wide/tall) hex map. My first thought was to just create a method to draw one hex, then translate by the appropriate offset and place the next hex. But this doesn't seem efficient. Any ideas? Also, as a side question, how do I determine if a MotionEvent is within the area of a given hex?
Extensive hex grid information.
To determine if a MotionEvent is within a certain hex, you have to convert the coords passed in via the motion event to your OpenGL world coords. It's just like a unit conversion: you know the screen goes from 0 to WIDTH, and your GL world, let's say, goes from -1 to 1.
xCoordWorld = (xCoord / (WIDTH - 0)) * (1 - (-1))
will give you the x coord in the range 0 to 2; then subtract 1 to get it into -1 to 1.
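A minimal sketch of that conversion (viewWidth/viewHeight stand for your GL surface's size; note the y flip, since screen y grows downward while GL y grows upward):
float worldX = (event.getX() / viewWidth) * 2.0f - 1.0f;
float worldY = 1.0f - (event.getY() / viewHeight) * 2.0f;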
As far as the hexes go, I've always used 'art' hexes: draw the hex out in a paint program, then render a bunch of squares with that piece of art on them. Fast, and easy to swap one hex out for another.
If you have a "ball" inside a 2D polygon, made up of say, 4 line segments that act as bounding walls, how do you calculate the angle of the ball after the collision with the irregularly sloped wall?
I know how to make the ball bounce if the wall is horizontal, vertical, or at a 45 degree angle. I also have my code setup to detect a collision with the wall.
I've read about dot products and normals, but I cannot figure out how to implement them in Java / Android. I'm completely stumped and feel like I've looked up everything 10 pages deep in Google 10 times now. I'm burned out trying to figure this out; I hope someone can help.
Apologies in advance: I don't know the correct Android types. I'm assuming you have a vector type with properties 'x' and 'y'.
If the wall were horizontal and the current velocity were 'vector' then it'd be as easy as:
vector.y = -vector.y;
And you'd leave the x component alone. So you need to do something analogous, but more general.
You do that by substituting the idea of the line normal (a vector perpendicular to the line) for hard coding for the y axis (which is perpendicular to the horizontal).
Since the normal is orthogonal to the line, it can be found by rotating the line by 90 degrees. In 2d, the vector (a, b) can be rotated by 90 degrees by converting it to (-b, a). Hence if you have a line from (x1, y1) to (x2, y2) then you can get the normal with:
vectorAlongLine.x = x2 - x1;
vectorAlongLine.y = y2 - y1;
normal.x = -vectorAlongLine.y;
normal.y = vectorAlongLine.x;
You don't actually care how long the original line was (and it'll affect computations later when you don't want it to), so you want to make the normal be of length 1 irrespective of its current length. You can do that by dividing it by its current length. So, e.g.
lengthOfNormal = Math.sqrt(normal.x*normal.x + normal.y*normal.y);
normal.x /= lengthOfNormal;
normal.y /= lengthOfNormal;
Using the Pythagorean theorem there to get the length.
With the horizontal line, flipping on the y axis was the same as (i) working out how far the vector extends along the y axis; and (ii) subtracting that amount twice: once to get the velocity to 0 in that direction, and again to make it the negative of the original. That is, it's the same as:
distanceAlongNormal = vector.y;
vector.y -= 2.0 * distanceAlongNormal;
The dot product is used in the general case to work out how far the vector extends along the normal. So it does the same job that taking vector.y does for the horizontal line. This is where you possibly have to take a bit of a leap of faith: it's a property of the dot product, and you can persuade yourself by inspecting a right-angled triangle. But for now, if you had a horizontal line, you'd have ended up with the normal (0, 1). Since the dot product would be:
vector.x * normal.x + vector.y * normal.y
You'd compute:
distanceAlongNormal = vector.x * 0.0 + vector.y * 1.0;
Which is obviously the same thing as just taking the y component.
Having worked out the distance along the normal, you actually want to then subtract that amount times the normal, times two. The only additional step here is multiplying by the normal to get a 2d quantity to subtract. That's because you're looking to subtract in the direction of the normal. So the complete code, based on the normal computed earlier, is:
distanceAlongNormal = vector.x * normal.x + vector.y * normal.y;
vector.x -= 2.0 * distanceAlongNormal * normal.x;
vector.y -= 2.0 * distanceAlongNormal * normal.y;
If you hadn't made normal of length 1, then you'd need to divide by the length here, since the dot product would scale the distanceAlongNormal value by that amount.
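Putting all the pieces together, a self-contained sketch (the method name and float[] return type are my choices, not a standard API):
// Reflect velocity (vx, vy) off the wall segment (x1, y1)-(x2, y2).
static float[] reflect(float vx, float vy, float x1, float y1, float x2, float y2) {
    // Rotate the wall direction 90 degrees to get the normal.
    float nx = -(y2 - y1);
    float ny = x2 - x1;
    // Normalize to length 1 so the dot product measures distance directly.
    float len = (float) Math.sqrt(nx * nx + ny * ny);
    nx /= len;
    ny /= len;
    // Distance the velocity extends along the normal (dot product).
    float d = vx * nx + vy * ny;
    // Subtract twice that distance, in the direction of the normal.
    return new float[] { vx - 2f * d * nx, vy - 2f * d * ny };
}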
This might come in handy for you
http://www.tonypa.pri.ee/vectors/tut07.html