How to get the absolute translation in a 2D OpenGL ES scene - Android

I have a pannable/zoomable image rendered in 2D OpenGL ES 1.1. After zooming, how can I get the absolute position from the left side of the image to the left side of my viewable area?
Here's an image to clarify what I mean:

Well, I don't know much about iOS, but logically speaking, depending on where the origin is and how your image is being drawn onto the screen, you could just calculate the distance from the x position of the image. For instance, if your origin is at the top left of the screen (0,0), and your image is being drawn from the top left (e.g. (-10, 0)), then your distance would simply be the absolute value of the x position of your image (e.g. 10 units). However, if your origin were, let's say, at the center, then you would have to take the screen size into account and subtract half of the width to get the distance. It may sound confusing, but it should just be basic math.
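A minimal sketch of that math, assuming the common scale-then-translate transform (screenX = imageX * zoom + panX); the names `panX` and `zoom` are hypothetical, not from the question's code:

```java
// Hypothetical helper: recovers the on-screen distance from the image's
// left edge to the viewport's left edge under pan and zoom.
// Assumes the transform order screenX = imageX * zoom + panX.
public final class PanZoomOffset {

    /**
     * @param imageLeftX left edge of the image in image/world units (often 0)
     * @param panX       current horizontal translation, in screen pixels
     * @param zoom       current zoom factor (1.0 = unzoomed)
     * @return distance in screen pixels from the viewport's left edge to the
     *         image's left edge; negative means the edge is off-screen left
     */
    public static float screenDistanceToImageLeft(float imageLeftX,
                                                  float panX,
                                                  float zoom) {
        return imageLeftX * zoom + panX;
    }
}
```

If your own transform applies translation before scaling, the formula becomes `(imageLeftX + panX) * zoom` instead; check which order your matrix multiplications use.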

Related

Calculate pixel offset with Projection and View matrices

I am developing an app where the user can draw onto a Bitmap (with MotionEvents and Canvas), which then gets projected onto an OpenGL Surface as a texture. The issue is, I'm having some trouble correctly offsetting the pixel where the user touched (position [x,y] from the top-left corner of the screen, given by MotionEvent) to the correct place on the Bitmap where the drawing should occur, considering both the position and scaling of the view matrix (maybe the projection as well). I'm unsure if there's a simple way to mathematically relate screen pixels to GL's Normalized Device Coordinates, which would make things easier.
Example:
Let's say the user touched the top-left corner of the screen (0,0). From view transformations (scaling and translating), however, such corner happens to be exactly at the center of the projected Bitmap. How can I offset this (0,0) position so that it draws in the center of it?
(Also, if what I'm trying to do is extremely impractical, and there's a much easier way, it would be welcome to know).
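There is a simple relation for the first step: a screen pixel maps linearly to NDC. A sketch of the two-step mapping, assuming the view transform is a uniform scale plus a translation applied in NDC space (ndc' = ndc * scale + t); the parameter names are hypothetical:

```java
// Hypothetical helper: MotionEvent pixel -> NDC -> pre-view-transform space.
public final class TouchUnproject {

    /** Converts screen pixels (origin top-left, y down) to OpenGL NDC
     *  (origin center, y up), returning {ndcX, ndcY}. */
    public static float[] screenToNdc(float px, float py,
                                      float viewW, float viewH) {
        float x = 2f * px / viewW - 1f;
        float y = 1f - 2f * py / viewH;   // flip y: GL's +y points up
        return new float[] { x, y };
    }

    /** Inverts a scale-then-translate view transform, returning the
     *  coordinates the touched point had before the view was applied. */
    public static float[] ndcToWorld(float ndcX, float ndcY,
                                     float scale, float tx, float ty) {
        return new float[] { (ndcX - tx) / scale, (ndcY - ty) / scale };
    }
}
```

For a full, arbitrary view/projection matrix the same idea generalizes to inverting the combined matrix, which is exactly what `GLU.gluUnProject` does on Android; the analytic version above only works when you know the transform is scale + translate.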

Android - Display text over specific position of image

In my Android app I have an ImageView which displays an image from assets. What I need to do is display text almost centered in the image (by "almost centered" I mean aligned to a rectangle near the center of the image).
I know the (x, y) coordinates of this point (in pixels), but of course these pixel coordinates vary from device to device.
I have tried using dp, with no luck.
So what I want to do is display text inside this rectangle. I have the (x, y) coordinates of each corner of the rectangle (in pixels, relative to the emulator).
How can I make this design responsive?
Thank you
Put the image in a RelativeLayout and add the TextView (with centerInParent) to that layout.
Try making the position relative to the image, not the emulator.
Then, when you move across different devices, you can multiply both values by the same scale factor.
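A sketch of that scale-factor idea, assuming you measured the rectangle on a reference image of known size; the class and parameter names are made up for illustration:

```java
// Hypothetical helper: maps pixel coordinates measured on a reference-sized
// image (e.g. on the emulator) to the displayed image's actual size.
public final class ResponsivePosition {

    public static float mapX(float refX,
                             float refImageWidth, float actualImageWidth) {
        return refX * (actualImageWidth / refImageWidth);
    }

    public static float mapY(float refY,
                             float refImageHeight, float actualImageHeight) {
        return refY * (actualImageHeight / refImageHeight);
    }
}
```

At runtime you would take the ImageView's actual drawn size (after layout) as `actualImageWidth`/`actualImageHeight` and position the TextView with the mapped coordinates.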

Viewport boundaries on any resolution with OpenGL ES

I'm having difficulties understanding the OpenGL perspective view. I've read tons of information, but it hasn't helped me achieve what I'm after: making sure my 3D scene fills the entire screen on every Android device.
To test this, I will be drawing a quad in 3D space which should end up touching every corner, filling up the whole device's screen. I could then use this quad, or rather its coordinates, to specify a bounding box at a certain Z distance, which I could use to place my geometry and make sure it fills my screen. When the screen resizes, or when running at another screen resolution, I would recalculate this bounding box and geometry. I'm not talking about static geometry: say I want to fill the screen with balls; it doesn't matter how big the balls are or how many there are, the only important thing is that the screen is filled and there are no redundant balls outside the visible frustum.
As far as I understand, when specifying the viewport you actually bind pixel values to the frustum's boundaries. I know that you can set up an orthographic view so that your window pixels match 3D geometry positions, but I'm not sure how this works in a perspective view.
Here I'm assuming the viewport width and height are mapped at nearZ, so a ball at Z=1f has its original size.
When moving the ball into the screen, in the direction of farZ, the ball gets scaled down for the perspective to work. So a ball at Z=51f, for instance, would appear smaller on my screen, and I would need more balls to fill up the entire screen.
Now, in order to do so, I'm looking for the purple boundaries.
I need these boundaries to fill the entire screen at different frustum sizes (width x height), while the frustum angle and the Z distance of the balls stay the same.
I guess I could use trigonometry to calculate these purple boundaries (see the blue triangle note).
Am I correctly marking the frustum angle, it being the vertical angle of the frustum?
Could someone elaborate on the green 1f and -1f values, as I seem to have read something about them? They seem to be some scalar used to resize the geometry within the frustum?
I actually want to be able to programmatically position geometry against the viewport borders in 3D space, at any resolution/window size, on any arbitrary Android device.
Note: I have a Flash background, which uses a stage (visual area) with a known width x height at any resolution; that makes it easy to position/scale assets using either absolute or percentage measurements. I guess I'm trying to figure out how this system applies to OpenGL's perspective view.
I guess this post using gluUnproject answers the question.
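The trigonometry hinted at above can also be written down directly: for a symmetric perspective frustum, the visible half-height at distance z follows from the vertical field-of-view angle, and the half-width is that times the aspect ratio. A sketch, with hypothetical names:

```java
// Hypothetical helper: extents of the visible slice of a symmetric
// perspective frustum at depth z (the "purple boundaries").
public final class FrustumExtents {

    /** Half-height of the visible area at distance z,
     *  given the vertical field of view in degrees. */
    public static float halfHeightAt(float z, float verticalFovDegrees) {
        return z * (float) Math.tan(Math.toRadians(verticalFovDegrees / 2.0));
    }

    /** Half-width at distance z: half-height times the aspect ratio
     *  (viewport width / viewport height). */
    public static float halfWidthAt(float z, float verticalFovDegrees,
                                    float aspect) {
        return halfHeightAt(z, verticalFovDegrees) * aspect;
    }
}
```

So a quad spanning (-halfWidth, -halfHeight) to (+halfWidth, +halfHeight) at depth z would exactly fill the screen; recomputing these two values on every surface-size change gives the stage-like coordinate system the question asks for, without needing gluUnproject.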

Texture Atlas in OpenGL ES 2.0

I am working on a simple project with OpenGL ES 2.0. It's gone fairly well, but I seem to have hit a spot that is poorly documented for beginners: trying to utilize a texture atlas. I have searched around a bit, but I can't seem to find any full code examples. Most search results lead to people giving the very basic idea of what an atlas is and how to use it, but never a full example that I can really study.
At the moment I am just trying to load a set of four or five images from one atlas and apply them to a single triangle strip. I can section out a specific part of the image as I want; I just can't find any examples of applying multiple pieces of that image to the same triangle strip.
I don't necessarily need a full tutorial on this (I wouldn't mind one!), but if somebody could point me to some example code that does something similar I'd be quite happy. Thank you very much in advance!
A texture atlas is no different from any other image you load and render with OpenGL; the trick is to adjust the texture coordinates of each vertex of your polygon(s) so they map to a smaller triangle/rectangle inside that image.
In OpenGL the coordinates of an image start at (0,0), the lower-left corner, and end at (1,1), the top-right corner. If you want to map only a region of the image to your polygon, assign the texture coordinates using normalized sizes (0.0 - 1.0); i.e. the middle point of the image would be at coordinates (0.5, 0.5).
To display a triangle strip that renders a rectangle using only the left half of the image, your texture coordinates would have to be similar to this:
(0.0, 0.0) vertex at lower left corner of rectangle
(0.0, 1.0) vertex at top left corner
(0.5, 0.0) vertex at lower right corner
(0.5, 1.0) vertex at top right corner
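Since atlas regions are usually known in pixels, a small sketch of converting a pixel rectangle to those normalized coordinates may help; names are hypothetical, and this assumes the atlas was loaded so that pixel row 0 (the top of the bitmap) corresponds to v = 1.0:

```java
// Hypothetical helper: converts a pixel rectangle inside a texture atlas
// into normalized GL texture coordinates {u0, v0, u1, v1}.
public final class AtlasRegion {

    /**
     * @param x, y   top-left corner of the region in pixels
     * @param w, h   region size in pixels
     * @param atlasW, atlasH  full atlas size in pixels
     */
    public static float[] uv(int x, int y, int w, int h,
                             int atlasW, int atlasH) {
        float u0 = x / (float) atlasW;
        float u1 = (x + w) / (float) atlasW;
        // Flip vertically: bitmap row 0 is the top, but GL's v = 1 is the top.
        float v1 = 1f - y / (float) atlasH;
        float v0 = 1f - (y + h) / (float) atlasH;
        return new float[] { u0, v0, u1, v1 };
    }
}
```

To draw several pieces of the atlas on the same strip, you either issue one draw call per region (rebinding texture coordinates) or build a larger vertex buffer where each quad carries the UVs of its own region; degenerate triangles between quads keep it a single strip.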

Emulating constant size of point sprites using OpenGL ES

I am trying to emulate point sprites in Android using OpenGL, specifically the characteristic of the size staying the same if the view is zoomed.
I do not want to use point sprites themselves as they pop out of the frustum when the "point" reaches the edge, regardless of size. I do not want to go down the route of Orthographic projection either.
Say I have a billboard square with a size of 1. When the user zooms in, I would need to decrease the size of the square so it looks the same size; if the user zooms out, I increase it. I have the projection and model matrices to hand if these are required, as well as the FOV. My head just goes blank every time I sit down and think about it! Any ideas on the necessary algorithm?
OK, since I zoom into the environment by changing the field of view, I divide the quad size by (max_fov / current_fov). It works for me.
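That self-answer as a one-liner, with hypothetical names; note this linear FOV ratio is an approximation that works well for moderate angles, while the exact counter-scale would use the ratio of tan(fov/2) terms:

```java
// Hypothetical helper: counter-scales a billboard quad so it keeps roughly
// the same on-screen size while zooming is done by narrowing the FOV.
public final class SpriteScale {

    /** The self-answer's formula: baseSize / (maxFov / currentFov). */
    public static float scaledSize(float baseSize,
                                   float maxFov, float currentFov) {
        return baseSize / (maxFov / currentFov);
    }

    /** Exact variant: scales by the ratio of visible frustum heights. */
    public static float scaledSizeExact(float baseSize,
                                        float maxFovDeg, float currentFovDeg) {
        double t0 = Math.tan(Math.toRadians(maxFovDeg / 2.0));
        double t1 = Math.tan(Math.toRadians(currentFovDeg / 2.0));
        return (float) (baseSize * t1 / t0);
    }
}
```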
