On Android, there's a Canvas class that represents a drawing surface, and it has a clipping rect. Question: are the rect's right and bottom borders inclusive or exclusive? In other words, if the rect is (0, 0)-(10, 10), will the Canvas allow drawing at pixels with coordinate 10?
According to another StackOverflow question, right and bottom are exclusive, but top and left are inclusive.
As I say in my answer there (which I suppose is really a comment), this is consistent with other Java APIs, and it has other benefits.
So, no, you won't be able to draw at coordinate 10. But it does mean that your Rect is a 10×10-pixel square.
Also, calculations are simpler, like:
int width = rect.right - rect.left;
int height = rect.bottom - rect.top;
That's just an example; I know we have .getWidth() and .getHeight() methods.
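For example, here's a minimal sketch of the behaviour inside a hypothetical onDraw (mPaint is assumed to be a Paint belonging to the View):

@Override
protected void onDraw(Canvas canvas) {
    // Left/top are inclusive, right/bottom are exclusive
    Rect clip = new Rect(0, 0, 10, 10);
    canvas.clipRect(clip);

    int width = clip.width();   // 10 - 0 == 10: columns 0..9 are drawable
    int height = clip.height(); // 10 - 0 == 10: rows 0..9 are drawable

    canvas.drawPoint(9f, 9f, mPaint);   // drawn: (9, 9) is inside the clip
    canvas.drawPoint(10f, 10f, mPaint); // clipped out: right/bottom are exclusive
}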
I have a question regarding transformations in OpenGL ES 2. I'm currently drawing a rectangle using triangle fans, as depicted in the image below. The origin is located in its center, while its width and height are 0.6 and 2 respectively. I assume that these sizes are in model space. However, in order to maintain the ratio of height to width on a tablet or phone, one has to apply a projection that takes the device's proportions (again width and height) into account. This is why I call orthoM(projectionMatrix, 0, -aspectRatio, aspectRatio, -1f, 1f, -1f, 1f); where the aspectRatio is given by float aspectRatio = (float) width / (float) height. This finally leads to the rectangle shown in the image below.

Now, I would like to move the rectangle along the x-axis to the border of the screen. However, I was not able to come up with the correct calculation to do so; either I moved it too little or too much. How would that calculation look? Furthermore, I'm a little bit confused about the sizes given in model space. What are the max and min values that can be achieved there?
Thanks a lot!
The vertex positions of the rectangle are in world space. One way to do this would be to take the screen coordinates you want to move to and transform them into world space.
For example:
If the screen is 300 x 200 and you are at the center, which is 0,0 in world space (or 150, 100 in screen space), and you want to translate to 300 in screen space:
The transformation should be: screen position to normalized device coordinates, then multiply by inverseOf(projection matrix * view matrix) and divide by the w component.
Here it is explained for the mouse position, which is ultimately the same thing; you just already know the z, because it is the one you used for your rectangle (if it lies in the x,y plane): OpenGL Math - Projecting Screen space to World space coords.
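As a rough illustration of that recipe (my own sketch, not from the linked answer) using android.opengl.Matrix; projectionMatrix, viewMatrix, screenX, screenY, width and height are assumed to already exist, and if you have no separate view matrix you can invert the projection matrix alone:

// Screen position -> normalized device coordinates (NDC)
float ndcX = 2f * screenX / width - 1f;
float ndcY = 1f - 2f * screenY / height;   // screen y grows downwards
float[] ndc = {ndcX, ndcY, 0f, 1f};

// Multiply by inverseOf(projection matrix * view matrix)
float[] viewProj = new float[16];
float[] invViewProj = new float[16];
float[] world = new float[4];
Matrix.multiplyMM(viewProj, 0, projectionMatrix, 0, viewMatrix, 0);
Matrix.invertM(invViewProj, 0, viewProj, 0);
Matrix.multiplyMV(world, 0, invViewProj, 0, ndc, 0);

// Divide by the w component to get world-space coordinates
float worldX = world[0] / world[3];
float worldY = world[1] / world[3];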
Background
I'm drawing a custom View, which consists of an arc along which images are drawn.
A bit like this "Wheel of Fortune" screenshot, where only part of a large disc is visible and, as the user drags the view, images become visible/hidden as appropriate and are drawn at the appropriate position and angle along the disc's edge.
This works fine; I use the code below to create a large bounding box (four times the width of the view, to get a more subtle arc), which I use with Path.arcTo() to draw the visible top edge of the disc.
Because the bounding box is square, the arc drawn (if I were to draw 360°) would be circular.
// Disc dimensions (based on this View's width/height/padding)
final int radius = width * 2;
final float halfWidth = width / 2f;
final float top = mTopPadding;
// Create a large, square bounding box to draw the disc in.
// Centre horizontally; top edge of the disc == top edge of this View (+ padding)
final RectF discBounds =
new RectF(-radius + halfWidth, top, radius + halfWidth, radius * 2 + top);
// Create an arc along the circumference of the disc,
// but only where it will intersect with this View
double arcSweep = Math.toDegrees(Math.asin(halfWidth / radius)) * 2;
double startAngle = 180 + ((180 - arcSweep) / 2d);
mDiscPath.reset();
mDiscPath.arcTo(discBounds, (float) startAngle, (float) arcSweep);
// Close the shape so that we fill the rest of this View
// (the area underneath the arc) with the disc bg colour
mDiscPath.lineTo(width, height);
mDiscPath.lineTo(0, height);
I then create another Path and again call arcTo(), using the exact same bounding box so that the same arc radius is maintained.
This time the sweep angle of the arc is larger, since there may be only two or three images shown within the View at one time, but an arbitrary number of images off-screen (in my case, up to about ten).
// Create another arc, along which the images should move,
// based on the number and width of the images.
// We will later use a PathMeasure object created from
// this Path to determine where to draw each image
arcSweep = (mTotalWidth * 180) / (radius * Math.PI);
startAngle = 180 + ((180 - arcSweep) / 2d);
mImagePath.reset();
mImagePath.arcTo(discBounds, (float) startAngle, (float) arcSweep);
Problem
In onDraw(), the mDiscPath is drawn as the background (canvas.drawPath(mDiscPath, fillPaint)), and then the appropriate bitmaps are drawn based on a PathMeasure object created from mImagePath and how far the user has dragged.
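(Roughly, each image is positioned like the sketch below; the names mDragOffset, imageSpacing and bitmaps are placeholders, not my exact code:)

PathMeasure measure = new PathMeasure(mImagePath, false);
float[] pos = new float[2];
float[] tan = new float[2];

for (int i = 0; i < bitmaps.size(); i++) {
    // Distance along the arc for this image, shifted by how far the user has dragged
    float distance = mDragOffset + i * imageSpacing;
    if (distance < 0 || distance > measure.getLength()) continue;

    measure.getPosTan(distance, pos, tan);
    float angle = (float) Math.toDegrees(Math.atan2(tan[1], tan[0]));

    canvas.save();
    canvas.rotate(angle, pos[0], pos[1]);
    canvas.drawBitmap(bitmaps.get(i), pos[0], pos[1], null);
    canvas.restore();
}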
However, it's noticeable that the images do not precisely follow the expected path as the disc is "rotated". This causes problems, as the images need to align accurately to the edge of the disc.
For troubleshooting, I started drawing mImagePath using canvas.drawPath(mImagePath, strokePaint)) to see why the image path didn't seem to follow the disc path.
In the screenshot below, to make the problem more obvious, the regular bitmaps are not drawn on top of the disc, and mImagePath has been translated downwards by 4dp (the problem is also visible without this translation).
Here we can see three independent instances of the custom View stacked on top of one another.
But it's clear that the black line (mImagePath) does not match the radius of the top of the coloured disc (mDiscPath) in each case; i.e. the radius of the black arc appears to be larger than the disc's radius.
The arcs for both Path objects were created using the same bounding box, so I would expect both arcs to have the same radius.
The line on the bottom disc seems to match up well, but the top two discs are clearly wrong.
The only real difference between the discs is the number of images displayed, and therefore the sweep angle of the image path (89°, 169°, 222° respectively for the three views).
Question
Why, if the exact same square RectF bounding box is being used to create two Path objects, do arcs drawn from these Paths have different radii?
Am I missing something? Should I be using a different API?
Postscript
I've ensured the bounding box is correctly sized and doesn't change between creating the two paths.
The start and sweep angles look correct in all cases (i.e. the midpoint of each arc is at 270°).
Creating brand new Paths or resetting the existing Paths makes no difference.
Using the same arc sweep for both Paths does work as expected.
I've tested on various devices and orientations, with and without software rendering.
I looked at the H&M Android app and I'm trying to figure out how to implement one of its widgets.
Does anyone have an idea how this image frame is implemented?
My guess is that it uses OpenGL.
A transparent PNG frame? It could also be a nine-patch!
I will venture a guess ;)
First the front image is created. In this case, it is built by inflating a linear layout with an ImageView and a TextView. This is then cached to a Bitmap (at setup time, not at draw time).
In onDraw, that bitmap is drawn to the screen. Then the canvas is clipped so that area is not drawn into any more. This saves a lot of drawing time by avoiding a quadruple overdraw of transparent pixels.
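Something like this sketch (my own rough illustration; mCachedBitmap and mImageRect stand in for whatever holds the cached bitmap and its bounds):

// Draw the pre-rendered front image, then exclude its area from the clip
canvas.drawBitmap(mCachedBitmap, null, mImageRect, null);
canvas.clipRect(mImageRect, Region.Op.DIFFERENCE); // or canvas.clipOutRect(mImageRect) on API 26+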
Then the backgrounds are drawn like this:
for(int i = NUMBER_OF_LAYERS - 1; i > 0; i--) {
canvas.save();
float rotation = MAX_ANGLE * shiftModifier * ((float) i / (NUMBER_OF_LAYERS - 1));
canvas.rotate(rotation, mImageHalfWidth, mImageHalfHeight);
paint.setAlpha((int) (255f / (2 * i)));
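//(assuming mBitmap is actually a Drawable such as a BitmapDrawable, which is what provides getBounds())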
canvas.drawRect(mBitmap.getBounds(), paint);
canvas.restore();
}
NUMBER_OF_LAYERS is the number of background layers.
MAX_ANGLE is the rotation angle for the most tilted layer.
shiftModifier is used to animate the background layers. It moves from zero (background completely hidden) to one (background angle = MAX_ANGLE).
paint is just a Paint with color set to white.
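shiftModifier itself would typically be driven by an animator; a minimal sketch of what I mean (again a guess, not the app's actual code):

ValueAnimator animator = ValueAnimator.ofFloat(0f, 1f);
animator.addUpdateListener(a -> {
    shiftModifier = (float) a.getAnimatedValue();
    invalidate();   // redraw with the new background angle
});
animator.start();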
I am working on an Android Application in which a 3d scene is displayed and the user should be able to select an area by clicking/tapping the screen. The scene is pretty much a planar (game) board on which different objects are placed.
Now, the problem is how do I get the clicked area on the board from the actual screen-space coordinates?
I was planning on using gluUnProject(), as I have access to (almost) all the necessary parameters. Unfortunately I am missing the winZ param, and cannot get the current depth as the touch event is occurring in a different thread than the GL-thread.
My new plan is to still use gluUnProject, but with a winZ of 0, and then project the resulting point onto the board (the board stretches from 0,0,0 to 10,0,10 in model space). However, I can't seem to figure out how to do this.
I would be very happy if anyone could help me out with the maths needed to do this (matrices were never my strongest side), or perhaps find a better solution.
To clarify; here is an image of what I want to do:
The red rectangle represent the device screen, the green x is the touch event and the black square is the board (grey subdivisions represent a square of one unit). I need to figure out where on the board the touch has happened (in this case it is in square 1,1).
As you are basically working in 2D already (I presume you mean your 3D board stretches from 0,0,0 to 10,10,0 in x,y,z), you could translate and interpolate/extrapolate the 2D/3D space coordinates from your screen space coordinates without gluUnProject(). You will need your screen resolution, and to pick the resolution of the 3D space grid you wish to convert to.

If both the screen and 3D space origins are aligned (0,0 in screen space is at 0,0,0 in 3D space), and your screen dimensions are 320x240, then using your existing 10x10 3D grid, 320/10 = 32 and 240/10 = 24, so the screen space size of a single 1x1 area is 32x24. If the user presses on 162, 40, they are pressing within (5, 1, 0) in the 3D space (162/32 >= 5 but < 6, 40/24 >= 1 but < 2).

If you need greater resolution than this, you can select a higher 3D space grid resolution (i.e. using 20 instead of 10). You don't need to update the GL matrix to use this factor. Though it may make things simpler in some ways, I'm sure from a modeling perspective you would have additional work to do. Just be aware that with a factor of 20, grid cell 1,3 would be at (0.5, 1.5, 0).

If your screen and 3D space origins are not already aligned, you will need to translate the screen space coordinate first. If 0,0 in screen space corresponds to 10,10,0, you take your screen resolution and subtract the current point from it, making 0,0 into 320, 240 in this example; our example point of 162, 40 would become 158, 200 (320 - 162 = 158 and 240 - 40 = 200).
If you'd like an overview of the projection matrix and how that all works, which could help you understand where to put the screen space dimensions in the unproject matrix, read this chapter of the OpenGL red book. http://www.glprogramming.com/red/chapter03.html
Hope this helps, and good luck!
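For what it's worth, the aligned-origin case above boils down to something like this (my own sketch with made-up variable names):

// Screen resolution and board grid resolution (10x10 board units)
int screenWidth = 320, screenHeight = 240;
int gridSize = 10;

float cellWidth = (float) screenWidth / gridSize;   // 32 px per board unit
float cellHeight = (float) screenHeight / gridSize; // 24 px per board unit

// A touch at (162, 40) on the screen...
int touchX = 162, touchY = 40;

// ...maps to board square (5, 1): 162 / 32 = 5.06..., 40 / 24 = 1.66...
int boardX = (int) (touchX / cellWidth);
int boardY = (int) (touchY / cellHeight);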
So, I managed to solve this by doing the following:
float[] clipPoint = new float[4];
int[] viewport = new int[]{0, 0, width, height};
//screenY/screenX are the screen-coordinates, y should be flipped:
screenY = viewport[3] - screenY;
//Calculate a z-value appropriate for the far clip:
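//(clip[0] and clip[1] are presumably the near and far clip plane distances)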
float dist = 1.0f;
float z = (1.0f/clip[0] - 1.0f/dist)/(1.0f/clip[0]-1.0f/clip[1]);
//Use gluUnProject to create a 3d point in the far clip plane:
GLU.gluUnProject(screenX, screenY, z, vMatrix, 0, pMatrix, 0, viewport, 0, clipPoint, 0);
//Get a point representing the 'camera':
float eyeX = lookat[0] + eyeOffset[0];
float eyeY = lookat[1] + eyeOffset[1];
float eyeZ = lookat[2] + eyeOffset[2];
//Calculate where the line between clipPoint and the eye/camera intersects the y = 0 plane (the board):
float dX = eyeX - clipPoint[0];
float dY = eyeY - clipPoint[1];
float dZ = eyeZ - clipPoint[2];
float resX = clipPoint[0] - (dX/dY)*clipPoint[1];
float resZ = clipPoint[2] - (dZ/dY)*clipPoint[1];
//resX and resZ are the wanted result.
I'm writing my first app of any consequence, so I may be going about this the entire wrong way, but...
I have a resource image that is 1600x880. I'd like to fill the entire screen (my canvas) with a subset of that image, such that an arbitrary x,y coordinate marks the top-left corner of the region, drawn at the top-left corner of the screen. For instance, if I was viewing this image on an N1 and I entered x=100 and y=50, I'd expect to see from 100,50 to 580,850, since its display area is 480x800.
I think I need to use Canvas.drawBitmap(Bitmap bitmap, Rect src, Rect dst, Paint paint). But, no matter what I plug in to either Rect (even if it's a perfectly sane set of values that shouldn't butt up against any edges of the image), I end up with an unexpected area or a grossly stretched/smooshed output.
I've tried using various combinations of calculations involving display.getWidth() and getHeight(), canvas.getWidth() and getHeight(), and bitmap.getWidth() and getHeight() but nothing seems to be working.
I don't know what I'm doing wrong.
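For reference, a sketch of how the src/dst rectangles could be set up so that a screen-sized window into the bitmap maps 1:1 onto the screen (x, y and mBitmap are placeholders): the two rects need the same dimensions, otherwise drawBitmap scales the source region to fit the destination.

// Source: a screen-sized window into the 1600x880 bitmap, starting at (x, y)
Rect src = new Rect(x, y, x + canvas.getWidth(), y + canvas.getHeight());
// Destination: the whole canvas
Rect dst = new Rect(0, 0, canvas.getWidth(), canvas.getHeight());
// src and dst have the same dimensions, so nothing is stretched
canvas.drawBitmap(mBitmap, src, dst, null);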