OpenGL ES Texture Mapping Overflow - android

I want to animate a 2D sprite sheet. I have a sprite sheet with a lot of character animations with different frame sizes. For a single animation, I scale a vertex quad to fit one frame and then change the texture position for the animation. This works pretty well for one animation, but when I switch to another animation with a different frame size and scale the vertices and fit the texture again, I get a side effect where the texture is stretched and doesn't fit. It's only on one animation frame, but it makes the change between two animations look very bad.
I think that is because of the vertex size change. So my idea is to have a fixed vertex size and fit the texture onto it without stretching it to the full vertex (the height is fixed for every animation).
Maybe an image will help, so I created one:
Here is my code; I hope it is enough:
public boolean nextFrame() {
    float textureWidth = textureMap()[currentAnimation][0];
    float frameCount = textureMap()[currentAnimation][1];
    float frameWidth = textureWidth / frameCount;

    if (loop) {
        if (currentFrame == frameCount)
            currentFrame = 0;
    } else {
        if (currentFrame == frameCount) {
            setAnimation(AnimationConstants.IDLE);
            loop = true;
            return false;
        }
    }

    float x_left = (float) currentFrame * frameWidth / textureWidth;
    float x_right = (float) (currentFrame * frameWidth + frameWidth) / textureWidth;

    texture[0] = x_left;  // top left x
    texture[1] = 1.0f;    // top left y
    texture[2] = x_left;  // bottom left x
    texture[3] = 0.0f;    // bottom left y
    texture[4] = x_right; // top right x
    texture[5] = 1.0f;    // top right y
    texture[6] = x_right; // bottom right x
    texture[7] = 0.0f;    // bottom right y

    ByteBuffer byteBuffer = ByteBuffer.allocateDirect(texture.length * 4);
    byteBuffer.order(ByteOrder.nativeOrder());
    textureBuffer = byteBuffer.asFloatBuffer();
    textureBuffer.put(texture);
    textureBuffer.position(0);

    currentFrame++;
    return true;
}
private void newVertex() {
    float textureWidth = textureMap()[currentAnimation][0];
    float frameCount = textureMap()[currentAnimation][1];
    float frameWidth = textureWidth / frameCount;
    float width = (float) frameWidth / (float) frameHeight;

    vertices[0] = pos_x;                    // bottom left x
    vertices[1] = pos_y;                    // bottom left y
    vertices[3] = pos_x;                    // top left x
    vertices[4] = pos_y + (1.0f * scale);   // top left y
    vertices[6] = pos_x + (width * scale);  // bottom right x
    vertices[7] = pos_y;                    // bottom right y
    vertices[9] = pos_x + (width * scale);  // top right x
    vertices[10] = pos_y + (1.0f * scale);  // top right y

    // z values
    vertices[2] = -0.2f;  // bottom left z
    vertices[5] = -0.2f;  // top left z
    vertices[8] = -0.2f;  // bottom right z
    vertices[11] = -0.2f; // top right z

    ByteBuffer byteBuffer = ByteBuffer.allocateDirect(vertices.length * 4);
    byteBuffer.order(ByteOrder.nativeOrder());
    vertexBuffer = byteBuffer.asFloatBuffer();
    vertexBuffer.put(vertices);
    vertexBuffer.position(0);
}
So for every new animation, I call newVertex().

Check this out.
I suppose you should use the same general idea. If needed, I can describe in detail how to place the texture correctly in your case.
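For example, a minimal sketch of one way to place the frame (assuming the quad keeps a fixed world size and the sheet leaves transparent padding between frames, so the sampled window may be a bit wider than the frame itself; quadWorldWidth and quadWorldHeight are illustrative names, the rest come from your code):

// Sample a window of the atlas whose aspect matches the fixed quad,
// so the frame is drawn 1:1 instead of being stretched to the quad.
float quadAspect = quadWorldWidth / quadWorldHeight;
float windowWidth = frameHeight * quadAspect;                   // texels sampled per frame
float frameCenter = currentFrame * frameWidth + frameWidth / 2f;
float x_left  = (frameCenter - windowWidth / 2f) / textureWidth;
float x_right = (frameCenter + windowWidth / 2f) / textureWidth;
// feed x_left / x_right into texture[] exactly as nextFrame() already does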

Related

Calculate position of an Image in Absolutelayout when zooming

I have a page which contains an AbsoluteLayout with a PinchZoomContainer (like the one from the Microsoft docs). In this container is an image that fills the entire screen. When the user taps on the image, another image (a pin) is positioned at the tap position. So far this works. My problem is that I couldn't figure out how to calculate the position of the added image (where the user tapped) when the user zooms in and out.
I want the added image to stay at the tapped position, no matter whether the user zooms in or out.
When I put the image in a Grid, the calculation gets done automatically.
Is there a better way to achieve that? I've chosen AbsoluteLayout because I need to put the image at the tapped position.
The following code is used for zooming. Here I can't figure out how to do the calculation for the added image. The image gets added to the AbsoluteLayout at runtime.
(Screenshots: normal scale / zoomed in.)
void OnPinchUpdated(object sender, PinchGestureUpdatedEventArgs e)
{
    if (e.Status == GestureStatus.Started)
    {
        // Store the current scale factor applied to the wrapped user interface element,
        // and zero the components for the center point of the translate transform.
        startScale = Content.Scale;
        Content.AnchorX = 0;
        Content.AnchorY = 0;
        startTransX = pin.TranslationX;
        startTranxY = pin.TranslationY;
    }
    if (e.Status == GestureStatus.Running)
    {
        // Calculate the scale factor to be applied.
        currentScale = currentScale + (e.Scale - 1) * startScale;
        currentScale = Math.Max(1, currentScale);

        // The ScaleOrigin is in relative coordinates to the wrapped user interface element,
        // so get the X pixel coordinate.
        double renderedX = Content.X + xOffset;
        double deltaX = renderedX / Width;
        double deltaWidth = Width / (Content.Width * startScale);
        double originX = (e.ScaleOrigin.X - deltaX) * deltaWidth;

        // The ScaleOrigin is in relative coordinates to the wrapped user interface element,
        // so get the Y pixel coordinate.
        double renderedY = Content.Y + yOffset;
        double deltaY = renderedY / Height;
        double deltaHeight = Height / (Content.Height * startScale);
        double originY = (e.ScaleOrigin.Y - deltaY) * deltaHeight;

        // Calculate the transformed element pixel coordinates.
        double targetX = xOffset - (originX * Content.Width) * (currentScale - startScale);
        double targetY = yOffset - (originY * Content.Height) * (currentScale - startScale);

        // Apply translation based on the change in origin.
        // needs to use Xamarin.Forms.Internals or Extension
        Content.TranslationX = targetX.Clamp(-Content.Width * (currentScale - 1), 0);
        Content.TranslationY = targetY.Clamp(-Content.Width * (currentScale - 1), 0);
        x = Content.TranslationX;
        y = Content.TranslationY;

        // Apply scale factor
        Content.Scale = currentScale;
    }
    if (e.Status == GestureStatus.Completed)
    {
        // Store the translation deltas of the wrapped user interface element.
        xOffset = Content.TranslationX;
        yOffset = Content.TranslationY;
        x = Content.TranslationX;
        y = Content.TranslationY;
    }
}
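One way to reason about the pin (a sketch of the geometry, under the same assumptions the handler already makes): with AnchorX = AnchorY = 0, applying scale s and translation (tx, ty) to Content renders a point at position (px, py) inside Content at

px' = px * s + tx,   py' = py * s + ty

So if the pin lives in the AbsoluteLayout outside the scaled Content, it has to be moved to that transformed point on every zoom update; alternatively, placing the pin inside the scaled Content lets it inherit the transform automatically.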

Rendering Bitmap based on Views position in parent

I'm trying to make a simple image editor. At the beginning I thought it would be a good idea to simply save the view state as a Bitmap, but as it turned out, there is a wide range of screen resolutions, and that leads to huge quality (and memory usage) fluctuations.
Now I'm trying to make a module that renders the views' state translated to a desired resolution.
In the code below I'm trying to recreate the current state of the views on a canvas:
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.id.test_1_1);
bitmap = Bitmap.createScaledBitmap(bitmap, parentView.getMeasuredWidth(),
        parentView.getMeasuredHeight(), true);
Canvas canvas = new Canvas(bitmap);
Paint paint = new Paint();

for (View rootView : addedViews) {
    ImageView imageView = rootView.findViewById(R.id.sticker);
    float[] viewPosition = new float[2];
    transformToAncestor(viewPosition, parentView, imageView);

    Bitmap originalBitmap = ((BitmapDrawable) imageView.getDrawable()).getBitmap();

    Matrix adjustMatrix = new Matrix();
    adjustMatrix.postTranslate(viewPosition[0], viewPosition[1]);
    adjustMatrix.postScale(
            rootView.getScaleX(),
            rootView.getScaleY(),
            rootView.getWidth() / 2,
            rootView.getHeight() / 2);
    adjustMatrix.postRotate(rootView.getRotation(),
            rootView.getWidth() / 2,
            rootView.getHeight() / 2);
    canvas.drawBitmap(originalBitmap, adjustMatrix, paint);
}
The transformToAncestor function is from here.
public static void transformToAncestor(float[] point, final View ancestor, final View descendant) {
    final float scrollX = descendant.getScrollX();
    final float scrollY = descendant.getScrollY();
    final float left = descendant.getLeft();
    final float top = descendant.getTop();
    final float px = descendant.getPivotX();
    final float py = descendant.getPivotY();
    final float tx = descendant.getTranslationX();
    final float ty = descendant.getTranslationY();
    final float sx = descendant.getScaleX();
    final float sy = descendant.getScaleY();

    point[0] = left + px + (point[0] - px) * sx + tx - scrollX;
    point[1] = top + py + (point[1] - py) * sy + ty - scrollY;

    ViewParent parent = descendant.getParent();
    if (descendant != ancestor && parent != ancestor && parent instanceof View) {
        transformToAncestor(point, ancestor, (View) parent);
    }
}
(The author noted that his function does not support rotation, but there's not much rotation in my example, so I don't think that's important for now.)
My problem is:
The first image is generated by saving the parent view's state. The second one is generated by translating the views' position, rotation and scale onto the canvas.
As you can see, on the canvas the unscaled stickers are positioned properly, but the scaled ones are positioned incorrectly.
How do I position those scaled views properly?
I've managed to fix the issue myself.
It turned out my solution was nearly OK, but I hadn't taken into consideration that manipulating the matrix changes the arrangement of the original points, so my
rootView.getWidth() / 2,
rootView.getHeight() / 2
is no longer the center of the view after calling Matrix.postScale or Matrix.postRotate.
I wanted to:
apply scale with pivot on top left corner,
apply rotation with pivot on the center of the view.
Given the assumptions, here's the working code:
// setup variables for sizing and transformation
float position[] = new float[2];
transformToAncestor(position, rootView, imageView);

float desiredRotation = imageView.getRotation();
float sizeDeltaX = imageView.getMeasuredWidth() / (float) imageBitmap.getWidth();
float sizeDeltaY = imageView.getMeasuredHeight() / (float) imageBitmap.getHeight();
float desiredScaleX = imageView.getScaleX() * sizeDeltaX * scaleX;
float desiredScaleY = imageView.getScaleY() * sizeDeltaY * scaleY;
float imageViewWidth = imageView.getMeasuredWidth() * imageView.getScaleX();
float imageViewHeight = imageView.getMeasuredHeight() * imageView.getScaleY();
float rootViewWidth = rootView.getMeasuredWidth();
float rootViewHeight = rootView.getMeasuredHeight();
float percentXPos = position[0] / rootViewWidth;
float percentYPos = position[1] / rootViewHeight;
float percentXCenterPos = (position[0] + imageViewWidth / 2) / rootViewWidth;
float percentYCenterPos = (position[1] + imageViewHeight / 2) / rootViewHeight;
float desiredPositionX = background.getWidth() * percentXPos;
float desiredPositionY = background.getHeight() * percentYPos;
float desiredCenterX = background.getWidth() * percentXCenterPos;
float desiredCenterY = background.getHeight() * percentYCenterPos;

// apply above variables to matrix
Matrix matrix = new Matrix();
float[] points = new float[2];
matrix.postTranslate(desiredPositionX, desiredPositionY);
matrix.mapPoints(points);
matrix.postScale(desiredScaleX, desiredScaleY, points[0], points[1]);
matrix.postRotate(desiredRotation, desiredCenterX, desiredCenterY);

// apply matrix to bitmap, then draw it on canvas
canvas.drawBitmap(imageBitmap, matrix, paint);
As you can see, the mapPoints method was the answer to my question - it simply returns the points after transformation.
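For anyone unfamiliar with mapPoints, a tiny self-contained illustration (the values are made up):

// android.graphics.Matrix transforms the array in place.
Matrix m = new Matrix();
m.postTranslate(100f, 50f);
float[] p = {0f, 0f};            // start at the origin
m.mapPoints(p);                  // p is now {100f, 50f}
m.postScale(2f, 2f, p[0], p[1]); // scale around the translated point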

How to zoom at particular XY coordinate(points) in ImageView

I have used this PhotoView library for a custom ImageView. I want to scale the image at a particular point. The method I found is setScale(float scale, float focalX, float focalY, boolean animate).
I am wondering what values I can pass for focalX and focalY. I have X and Y coordinates which I am passing currently, and it scales to a very different position.
Here is a snippet,
intResultX = intTotalX / intArraySize;
intResultY = intTotalY / intArraySize;
mMap.setScale(5, intResultX, intResultY, true);
To zoom at a particular XY coordinate in an ImageView, you can pass values for focalX and focalY along with the scale (which must be between the max and min scales of the PhotoView) and a boolean to enable animation.
Code to get the max and min scales:
mPhotoView.getMinimumScale();
mPhotoView.getMaximumScale();
focalX and focalY can be any point on the screen. Here I have taken two examples: one is the center of the screen, and the other is the top-left corner. The following is the code for both cases.
Code:
Random r = new Random();
float minScale = mPhotoView.getMinimumScale();
float maxScale = mPhotoView.getMaximumScale();
float randomScale = minScale + (r.nextFloat() * (maxScale - minScale));

DisplayMetrics displayMetrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(displayMetrics);
int height = displayMetrics.heightPixels;
int width = displayMetrics.widthPixels;
int centerX = width / 2;
int centerY = height / 2;

/* pass a value of focalX and focalY to scale the image to the center */
//mPhotoView.setScale(randomScale, centerX, centerY, true);

/* pass a value of focalX and focalY to scale the image to the top left corner */
mPhotoView.setScale(randomScale, 0, 0, true);
Set zoom to the specified scale. Image will be centered around the point
(focusX, focusY). These floats range from 0 to 1 and denote the focus point
as a fraction from the left and top of the view. For example, the top left
corner of the image would be (0, 0). And the bottom right corner would be (1, 1).
public void setZoom(float scale, float focusX, float focusY, ScaleType scaleType) {
    /* setZoom can be called before the image is on the screen, but at this point,
       image and view sizes have not yet been calculated in onMeasure. Thus, we should
       delay calling setZoom until the view has been measured. */
    if (!onDrawReady) {
        delayedZoomVariables = new ZoomVariables(scale, focusX, focusY, scaleType);
        return;
    }
    if (scaleType != mScaleType) {
        setScaleType(scaleType);
    }
    resetZoom();
    scaleImage(scale, viewWidth / 2, viewHeight / 2, true);
    matrix.getValues(m);
    m[Matrix.MTRANS_X] = -((focusX * getImageWidth()) - (viewWidth * 0.5f));
    m[Matrix.MTRANS_Y] = -((focusY * getImageHeight()) - (viewHeight * 0.5f));
    matrix.setValues(m);
    fixTrans();
    setImageMatrix(matrix);
}
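If the coordinates you have are points on the bitmap rather than on the screen, a possible conversion for PhotoView's setScale (a sketch; it relies on PhotoView's getDisplayRect(), which returns the on-screen rectangle of the displayed image - check it against the library version you use; bitmapX, bitmapY, bitmapWidth and bitmapHeight are illustrative names):

RectF d = mPhotoView.getDisplayRect();  // where the image currently sits on screen
float focalX = d.left + d.width() * (bitmapX / (float) bitmapWidth);
float focalY = d.top + d.height() * (bitmapY / (float) bitmapHeight);
mPhotoView.setScale(randomScale, focalX, focalY, true);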
Hope this helps. Happy coding.

Android Native GLES 2.0 Blank Screen With MVP Matrix

I'm drawing a simple quad using GLES 2.0 in C++. If I draw it with a basic vertex shader there is no problem, and this is the result:
uniform mat4 u_mvp;
attribute vec4 a_Position;

void main() {
    gl_Position = a_Position;
}
If I add the MVP matrix
gl_Position = u_mvp * a_Position;
then there's nothing on screen.
This is the perspective matrix:
Matrix4 Matrix4::getPerspective(float angle, float ratio, float near, float far) {
    // angle = 45; ratio = 1.438; near = 1; far = 100
    Matrix4 matrix;
    float top = (float) (near * Math::tangentDegrees(angle / 2.0f));
    float bottom = -top;
    float right = top * ratio;
    float left = -right;

    matrix.m[0] = (2 * near) / (right - left);
    matrix.m[1] = 0.0f;
    matrix.m[2] = (right + left) / (right - left);
    matrix.m[3] = 0.0f;

    matrix.m[4] = 0.0f;
    matrix.m[5] = (2 * near) / (top - bottom);
    matrix.m[6] = (top + bottom) / (top - bottom);
    matrix.m[7] = 0.0f;

    matrix.m[8] = 0.0f;
    matrix.m[9] = 0.0f;
    matrix.m[10] = -(far + near) / (far - near);
    matrix.m[11] = -(2 * far * near) / (far - near);

    matrix.m[12] = 0.0f;
    matrix.m[13] = 0.0f;
    matrix.m[14] = -1.0f;
    matrix.m[15] = 0.0f;

    return matrix;
}
The model-view matrix is obtained as follows:
modelView = translation * rotation * scale
mvp = perspective * modelView
Each element of this multiplication is obtained following this example:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
and all of them are initialized to the identity matrix.
With this command I obtain the handle:
transformUniformHandle = (GLuint) glGetUniformLocation(getProgramId(), "u_mvp");
With this command I set the mvp matrix in the shader (I tried both GL_FALSE and GL_TRUE):
glUniformMatrix4fv(transformUniformHandle, 1, GL_TRUE, modelViewProjection.m);
OpenGL uses column-major storage for matrices, while your projection matrix is set up in row-major order. So you need to transpose the order of the matrix elements:
matrix.m[0] = (2 * near) / (right - left);
matrix.m[4] = 0.0f;
matrix.m[8] = (right + left) / (right - left);
matrix.m[12] = 0.0f;
matrix.m[1] = 0.0f;
matrix.m[5] = (2 * near) / (top - bottom);
matrix.m[9] = (top + bottom) / (top - bottom);
matrix.m[13] = 0.0f;
matrix.m[2] = 0.0f;
matrix.m[6] = 0.0f;
matrix.m[10] = -(far + near) / (far - near);
matrix.m[14] = -(2 * far * near) / (far - near);
matrix.m[3] = 0.0f;
matrix.m[7] = 0.0f;
matrix.m[11] = -1.0f;
matrix.m[15] = 0.0f;
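For reference, in column-major order element m[i] sits at row i % 4, column i / 4, so the flat array maps to the matrix like this:

| m[0]  m[4]  m[8]   m[12] |
| m[1]  m[5]  m[9]   m[13] |
| m[2]  m[6]  m[10]  m[14] |
| m[3]  m[7]  m[11]  m[15] |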
You tried to transpose the matrix by passing GL_TRUE as the 3rd argument here:
glUniformMatrix4fv(transformUniformHandle, 1, GL_TRUE, modelViewProjection.m);
This is not supported in ES 2.0, the only valid argument value is GL_FALSE. You will see a GL_INVALID_VALUE error when you call glGetError() after this. Calling glGetError() should be routine if you have any problems with your OpenGL code.
You could keep the matrix row-major if you really wanted to, and change the shader code accordingly by multiplying the vector from the right:
gl_Position = a_Position * u_mvp;
But it's probably better to just use column-major order for your matrices.
Also, you're not showing your model-view matrix. You'll have to make sure that the view transformation translates the geometry in the negative z-direction, so that it's placed between the near and far planes of the projection transformation you are using.
Given you haven't shown your whole code, I can't be sure, but it could be that you haven't called glUseProgram before calling glUniformMatrix4fv. It might also be that your triangles are clipped out by the projection matrix. Make sure the points are within the clipping ranges. If that doesn't fix it you should get errors from glGetError. This page will explain its use: OpenGL error checking
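For readers doing this from Java rather than C++, a minimal error-check helper of the kind recommended above might look like this (GLES20; the C API is used the same way):

import android.opengl.GLES20;
import android.util.Log;

// Drain and log every pending GL error, tagged with the operation that preceded it.
public static void checkGlError(String op) {
    int error;
    while ((error = GLES20.glGetError()) != GLES20.GL_NO_ERROR) {
        Log.e("GLError", op + ": glError 0x" + Integer.toHexString(error));
    }
}

Call it right after suspect calls, e.g. checkGlError("glUniformMatrix4fv").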

android opengl check visibility of a point with camera zoom

I'm working on an Android OpenGL 1.1 2D game with a top view on a vehicle and a camera zoom relative to the vehicle's speed. When the speed increases, the camera zooms out to offer the player better road visibility.
I'm having a little trouble finding the exact way to detect whether a sprite is visible or not, given its position and the current camera zoom.
An important precision: all of my game's objects are on the same z coordinate. I use 3D just for the camera effect. (That's why I do not need complicated frustum calculations.)
Here is a sample of my GLSurfaceView.Renderer class:
public static float fov_degrees = 45f;
public static float fov_radians = fov_degrees / 180 * (float) Math.PI;
public static float aspect; // 1.15572 on my device
public static float camZ;   // 927 on my device

@Override
public void onSurfaceChanged(GL10 gl, int x, int y) {
    aspect = (float) x / (float) y;
    camZ = y / 2 / (float) Math.tan(fov_radians / 2);
    Camera.MINIDECAL = y / 4; // minimum cam zoom out (192 on my device)

    if (x == 0) { // Prevent A Divide By Zero By
        x = 1;    // Making Height Equal One
    }
    gl.glViewport(0, 0, x, y);           // Reset The Current Viewport
    gl.glMatrixMode(GL10.GL_PROJECTION); // Select The Projection Matrix
    gl.glLoadIdentity();                 // Reset The Projection Matrix
    // Calculate The Aspect Ratio Of The Window
    GLU.gluPerspective(gl, fov_degrees, aspect, camZ / 10, camZ * 10);
    GLU.gluLookAt(gl, 0, 0, camZ, 0, 0, 0, 0, 1, 0); // move camera back
    gl.glMatrixMode(GL10.GL_MODELVIEW);  // Select The Modelview Matrix
    gl.glLoadIdentity();                 // Reset The Modelview Matrix
}
When I draw any camera-relative object, I use this translation method:
gl.glTranslatef(position.x - camera.centerPosition.x, position.y - camera.centerPosition.y, -camera.zDecal);
Everything is displayed fine; the problem comes from my physics thread when it checks whether an object is visible or not:
public static boolean isElementVisible(Element element) {
    xDelta = (float) ((camera.zDecal + GameRenderer.camZ) * GameRenderer.aspect * Math.atan(GameRenderer.fov_radians));
    yDelta = (float) ((camera.zDecal + GameRenderer.camZ) * Math.atan(GameRenderer.fov_radians));
    // (xDelta and yDelta are in reality updated only once per frame, or when the camera zoom changes)

    Camera camera = ObjectRegistry.INSTANCE.camera;
    float xMin = camera.centerPosition.x - xDelta / 2;
    float xMax = camera.centerPosition.x + xDelta / 2;
    float yMin = camera.centerPosition.y - yDelta / 2;
    float yMax = camera.centerPosition.y + yDelta / 2;
    // xMin and yMin are supposed to be the lower x and y bounds of the visible plane,
    // same for xMax and yMax.
    // Then I just check that my sprite is visible in this rectangle.
    Vector2 phD = element.getDimToTestIfVisibleOnScreen();
    int sizeXd2 = (int) phD.x / 2;
    int sizeYd2 = (int) phD.y / 2;
    return (element.position.x + sizeXd2 > xMin)
            && (element.position.x - sizeXd2 < xMax)
            && (element.position.y - sizeYd2 < yMax)
            && (element.position.y + sizeYd2 > yMin);
}
Unfortunately the objects were disappearing too soon and appearing too late, so I manually added some zoom out on the camera for test purposes.
I did some manual tests and found that adding approximately 260 to the camera z index while calculating xDelta and yDelta made it work.
So the lines are now:
xDelta = (float) ((camera.zDecal + GameRenderer.camZ + 260) * GameRenderer.aspect * Math.atan(GameRenderer.fov_radians));
yDelta = (float) ((camera.zDecal + GameRenderer.camZ + 260) * Math.atan(GameRenderer.fov_radians));
Because it's a hack and the magic number may not work on every device, I would like to understand what I missed. I guess there is something in that "260" magic number that comes from the fov or the width/height ratio, and that could be turned into a formula parameter for pixel-perfect detection.
Any guess?
My guess is that you should be using Math.tan(GameRenderer.fov_radians) instead of Math.atan(GameRenderer.fov_radians).
Reasoning:
If you used a camera with 90 degree fov, then xDelta and yDelta should be infinitely large, right? Since the camera would have to view the entire infinite plane.
tan(pi/2) is infinite (and negative infinity). atan(pi/2) is merely 1.00388...
tan(pi/4) is 1, compared to atan(pi/4) of 0.66577...
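For reference, the standard frustum geometry: at distance d from the camera, a symmetric vertical field of view theta shows a slab of height 2 * d * tan(theta / 2), and aspect times that in width. Applied here (a sketch, not tested against the game's coordinate conventions), the full visible extents would be

yDelta = 2 * (camera.zDecal + GameRenderer.camZ) * Math.tan(GameRenderer.fov_radians / 2);
xDelta = GameRenderer.aspect * yDelta;

which needs neither atan nor a magic offset.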
