Calculate pixel offset with Projection and View matrices - android

I am developing an app where the user can draw onto a Bitmap (with MotionEvents and a Canvas), which then gets projected onto an OpenGL surface as a texture. The issue is that I'm having trouble correctly mapping the pixel the user touched (the [x,y] position from the top-left corner of the screen, given by the MotionEvent) onto the place on the Bitmap where the stroke should be drawn, taking into account both the translation and the scaling of the view matrix (and maybe the projection as well). I'm unsure whether there's a simple way to mathematically relate screen pixels to GL's normalized device coordinates, which would make things easier.
Example:
Let's say the user touched the top-left corner of the screen (0,0). Because of the view transformations (scaling and translating), however, that corner happens to land exactly at the center of the projected Bitmap. How can I map this (0,0) position so that the stroke is drawn at the Bitmap's center?
(Also, if what I'm trying to do is extremely impractical and there's a much easier way, I'd be glad to hear it.)
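(For what it's worth, one common way to relate the two spaces is to invert the combined view-projection matrix and push the touch point through it. Below is a minimal sketch assuming an ES 2.0 setup with android.opengl.Matrix; the screenToWorld name and its parameters are placeholders, not from the question.)

```java
import android.opengl.Matrix;

// A sketch only: pass in the same matrices you upload to the shader.
public class TouchMapper {

    /** Maps a MotionEvent position to world coordinates on the Z=0 plane. */
    public static float[] screenToWorld(float touchX, float touchY,
                                        int screenWidth, int screenHeight,
                                        float[] projectionMatrix,
                                        float[] viewMatrix) {
        // 1. Screen pixels -> normalized device coordinates (NDC).
        //    Both axes run from -1 to 1, and screen Y is flipped.
        float ndcX = 2f * touchX / screenWidth - 1f;
        float ndcY = 1f - 2f * touchY / screenHeight;

        // 2. Invert the combined view-projection matrix.
        float[] viewProjection = new float[16];
        float[] inverse = new float[16];
        Matrix.multiplyMM(viewProjection, 0, projectionMatrix, 0, viewMatrix, 0);
        Matrix.invertM(inverse, 0, viewProjection, 0);

        // 3. Unproject the NDC point back into world space.
        float[] ndc = {ndcX, ndcY, 0f, 1f};
        float[] world = new float[4];
        Matrix.multiplyMV(world, 0, inverse, 0, ndc, 0);

        // 4. Perspective divide (a no-op for a pure orthographic setup).
        return new float[] { world[0] / world[3], world[1] / world[3] };
    }
}
```

The resulting world coordinates can then be scaled into the Bitmap's own pixel grid to find where to draw.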

Related

OpenGL android 2D projection matrix

I am using OpenGL ES 2.0 with Android and I would like to know how to do the following:
I think it's easier to explain with a picture...
How can I rectify this stretching? Note: I am working in 2D.
I've heard this problem is solved using something called a projection matrix. I have also read a StackOverflow question saying that the Android documentation for setting up a projection matrix is not good. I tried it myself and couldn't get it to work.
This question is extremely poorly put. In the left image you have a rectangular view with a [0,1]x[0,1] coordinate system and a correctly drawn triangle, while on the right you have the same view with the view's own coordinate system and a stretched triangle... Taking these two things into consideration, your triangle coordinates are already stretched to begin with (or there is an extra model-view matrix). If they weren't stretched, the triangle would be drawn correctly in the right image.
It is a very common issue that your scene gets stretched when dealing with different view ratios. In general, to solve this you are looking for something like glOrtho, which lets you define your own coordinate system. The input parameters for this method are left, right, top and bottom, and it is easiest to simply use screen coordinates (as presented in the right image). Another approach is to normalise this input to either [0,1]x[0,height/width] or [0,width/height]x[0,1]. These two methods represent "fit" and "fill", and which is which depends on the width of the view being smaller or larger than its height (portrait, landscape).
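A minimal sketch of the "fit" variant; since the question mentions OpenGL ES 2.0, this uses Matrix.orthoM from android.opengl rather than glOrtho itself, and mProjectionMatrix is assumed to be the array handed to the shader:

```java
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLES20;
import android.opengl.Matrix;

// Inside your GLSurfaceView.Renderer implementation:
private final float[] mProjectionMatrix = new float[16];

@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
    GLES20.glViewport(0, 0, width, height);
    float aspect = (float) width / height;
    if (aspect >= 1f) {
        // Landscape: "fit" on height, x spans [0, width/height], y spans [0, 1].
        Matrix.orthoM(mProjectionMatrix, 0, 0f, aspect, 0f, 1f, -1f, 1f);
    } else {
        // Portrait: "fit" on width, x spans [0, 1], y spans [0, height/width].
        Matrix.orthoM(mProjectionMatrix, 0, 0f, 1f, 0f, 1f / aspect, -1f, 1f);
    }
}
```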
When using a correct orthographic matrix, your square will always be a square without any additional matrices or multiplying of the vertex arrays... In your case it seems you have already multiplied your vertices, so I suggest you remove that, all of it. If you cannot, and those vertices will keep being scaled incorrectly, I suggest you use the model-view matrix to rescale them.

Android Rotating a set of bitmaps around a common pivot point

I am trying to create a game like a jigsaw puzzle. I am using a class that extends View, and in its draw method I draw the different bitmap pieces. When the user taps a bitmap, I rotate it by 90 degrees, and that works perfectly. But when the user combines some bitmap pieces and then rotates the group, the bitmaps rotate around their own center points, destroying the group structure.
My question is: how do I rotate a set of bitmaps around a common pivot point, so that when a group of bitmaps rotates, it retains its shape?
I am assuming that each of your pieces rotates around its own center, which is what is causing your problem.
One possibility is to make the background a center-weighted object, or to look at the background and declare its center as the point of rotation.
Then calculate the approximate box size of each jigsaw piece (this could be dynamic if you support zoom) to figure out its placement on the screen as bitmap objects in Draw().
Now, imagine a line drawn from the center to the edge of the screen being rotated to get your angle.
Based on this, you would need to calculate a new angle for each center-weighted object (jigsaw piece) from the angle of rotation around the screen center. Each piece gets a different rotation about its own axis because of that common line, or angle of rotation, set by the center of the screen. A sketch of this idea follows below.
This is more of an algorithm/calculation question than a programming one; more specifics on your issue would help.
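A minimal sketch of that idea on the Canvas side, assuming each piece knows its own position and spin; the Piece class and its fields are illustrative, not from the question:

```java
import java.util.List;
import android.graphics.Canvas;

// Draws a group of pieces rotated together around a shared pivot.
void drawGroup(Canvas canvas, List<Piece> group,
               float pivotX, float pivotY, float groupAngle) {
    canvas.save();
    // Rotate the whole canvas around the shared pivot; everything drawn
    // afterwards orbits that point, so the group keeps its shape.
    canvas.rotate(groupAngle, pivotX, pivotY);
    for (Piece piece : group) {
        canvas.save();
        // Each piece can still spin around its own center on tap.
        canvas.rotate(piece.ownAngle, piece.centerX, piece.centerY);
        canvas.drawBitmap(piece.bitmap, piece.left, piece.top, null);
        canvas.restore();
    }
    canvas.restore();
}
```

Nesting the save/restore pairs keeps the per-piece spin independent of the group rotation, so the relative layout survives.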

how to get the absolute translation in a 2D opengl es scene

I have a pannable/zoomable image rendered in 2D OpenGL ES 1.1. After zooming, how can I get the absolute distance from the left side of the image to the left side of my viewable area?
Here's an image to clarify what I mean:
Well, I don't know much about iOS, but logically speaking, depending on where the origin is and how your image is drawn onto the screen, you could just calculate the distance from the image's x position. For instance, if your origin is at the top left of the screen (0,0) and your image is drawn from the top left (e.g. at (-10, 0)), then your distance is simply the absolute value of the image's x position (10 units in this example). However, if your origin is, let's say, at the center, then you have to take the screen size into account and subtract half of the width to get the distance. It may sound confusing, but it is just basic math.
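A quick sketch of that arithmetic, with the origin convention made explicit (the method name and parameters are illustrative):

```java
// imageX is the image's left edge in scene units; screenWidth is the
// visible width in the same units (i.e. already divided by the zoom factor).
float distanceFromLeftEdge(float imageX, float screenWidth, boolean originAtCenter) {
    if (originAtCenter) {
        // The viewable area's left edge sits at -screenWidth / 2.
        return Math.abs(imageX + screenWidth / 2f);
    }
    // Origin at the top-left: the distance is just |x|.
    return Math.abs(imageX);
}
```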

Viewport boundaries on any resolution with OpenGL ES

I'm having difficulties understanding the OpenGL perspective view. I've read tons of information, but it hasn't helped me achieve what I'm after, which is making sure my 3D scene fills the entire screen on every Android device.
To test this, I will draw a quad in 3D space which should end up touching every corner, filling up the whole device's screen. I could then use this quad, or rather its coordinates, to specify a bounding box at a certain Z distance into which I could put my geometry, making sure it fills the screen. When the screen resizes, or when running at another screen resolution, I would recalculate this bounding box and geometry. I'm not talking about static geometry; say I want to fill the screen with balls, and it doesn't matter how big the balls are or how many there are. The only important thing is that the screen is filled and there are no redundant balls outside the visible frustum.
As far as I understand, when specifying the viewport you actually bind pixel values to the frustum's boundaries. I know you can set up an orthographic view so that your window pixels match 3D geometry positions, but I'm not sure how this works in a perspective view.
Here I'm assuming the viewport width and height to be mapped to the near plane (nearZ), so a ball at Z=1f has its original size.
When moving the ball into the screen, i.e. towards farZ, the ball would be scaled down for the perspective to work. A ball at Z=51f, for instance, would appear smaller on my screen, and I would need more balls to fill up the entire screen.
Now, in order to do so, I'm looking for the purple boundaries.
I actually need these boundaries to fill the entire screen on different frustum sizes (width x height), while the frustum angle and the Z distance of the balls stay the same.
I guess I could use trig to calculate these purple boundaries (see the blue triangle note).
Am I marking the frustum angle correctly, it being the vertical angle of the frustum?
Could someone elaborate on the green 1f and -1f values, as I seem to have read something about them? They seem to be some scalar used to resize the geometry within the frustum?
I actually want to be able to programmatically position geometry against the viewport borders within 3D space at any resolution/window on any arbitrary Android device.
Note: I have a Flash background, which uses a stage (the visible area) with a known width x height at any resolution, making it easy to position/scale assets with either absolute or percentage measurements. I guess I'm trying to figure out how that system applies to OpenGL's perspective view.
I guess this post using gluUnproject answers the question.
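For anyone landing here, the trig the question hints at is short enough to sketch. Assuming a symmetric frustum built from a vertical field of view (the names below are mine, not from any API):

```java
// Half-extents of a symmetric perspective frustum at distance z from
// the eye; fovYDegrees is the vertical angle used to build the frustum.
float[] frustumHalfExtentsAt(float z, float fovYDegrees, float aspect) {
    // Opposite side of the right triangle the question sketches in blue.
    float halfHeight = z * (float) Math.tan(Math.toRadians(fovYDegrees / 2.0));
    // Width scales with the viewport aspect ratio (width / height).
    float halfWidth = halfHeight * aspect;
    return new float[] { halfWidth, halfHeight };
}
```

A quad spanning [-halfWidth, halfWidth] x [-halfHeight, halfHeight] at distance z should then touch all four screen edges; the green 1f/-1f marks are likely the normalized [-1, 1] range that this box ends up mapped into.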

Emulating constant size of point sprites using OpenGL ES

I am trying to emulate point sprites in Android using OpenGL, specifically the characteristic of the size staying the same when the view is zoomed.
I do not want to use point sprites themselves, as they pop out of the frustum when the "point" reaches the edge, regardless of size. I do not want to go down the route of orthographic projection either.
Say I have a billboard square with a size of 1. When the user zooms in, I need to decrease the size of the square so it looks the same size; when the user zooms out, I increase it. I have the projection and model matrices to hand if these are required, as well as the FOV. My head just goes blank every time I sit down and think about it! Any ideas on the necessary algorithm?
OK, since I zoom by changing the field of view, I divide the quad size by (max_fov / current_fov). It works for me.
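For reference, a sketch of that rule of thumb next to a tan-based variant that stays exact for large FOV changes (the method names and baseSize are illustrative):

```java
// The poster's rule of thumb: shrink the quad as the FOV narrows.
float spriteSize(float baseSize, float maxFovDegrees, float currentFovDegrees) {
    return baseSize / (maxFovDegrees / currentFovDegrees);
}

// A tan-based variant: projected size is proportional to 1 / tan(fov / 2),
// so scaling the quad by the tan ratio keeps its on-screen size constant.
float spriteSizeExact(float baseSize, float maxFovDegrees, float currentFovDegrees) {
    double tMax = Math.tan(Math.toRadians(maxFovDegrees / 2.0));
    double tCur = Math.tan(Math.toRadians(currentFovDegrees / 2.0));
    return baseSize * (float) (tCur / tMax);
}
```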
