I am trying to implement a hand-painting app on Android (like Infinite Design), and I decided to use vector graphics because they can scale without distortion.
I thought about it a lot and decided to try a viewport-horizon-world model:
Viewport: determined by the size of the phone's screen (e.g. 1080*1920); it is the part you see and touch.
Horizon: the region of the world that is displayed in the viewport.
World: the real coordinates of the points that make up each path (lines, Béziers, etc.).
This model works like this:
First, you touch the screen of the phone, and the horizon translates the touch point to real world coordinates (accounting for any pan or zoom) and saves the value to the world.
Second, you can pan and zoom with gestures, which change the attributes of the horizon. For example, if you pan left 100 and down 100, the horizon knows its offset is now (100,100), and its bounds change from ((0,0),(1080,1920)) to ((100,100),(1180,2020)).
Last, when drawing, I find the paths that fall inside the horizon (by intersecting the horizon's bounds with each path's bounds), calculate the display coordinates relative to the horizon, and draw the path with canvas.drawPath() etc.
Now the problem: when I only offset the horizon, calculating the display coordinates just means adding the offset values, but when I scale it becomes difficult. For example, take a path with bounds ((0,0),(100,100)) and a horizon scaled by 0.5 about the point (500,500): I don't know where the bounds end up, and I don't know how to calculate the new path's anchor, size, and stroke width (maybe they are just multiplied by the scale factor).
The function I want to implement is like the viewport in SVG.
I think it should use coordinate mapping, but how? Please give me some clues. (A sketch of what I mean follows below.)
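For concreteness, here is a minimal sketch of the mapping I have in mind, using android.graphics.Matrix in the role of the horizon; the numbers come from the pan and zoom examples above, and names like worldToScreen and worldStrokeWidth are purely illustrative:

Matrix worldToScreen = new Matrix();                 // plays the role of the horizon

worldToScreen.postTranslate(-100, -100);             // pan: horizon offset (100,100)
worldToScreen.postScale(0.5f, 0.5f, 500f, 500f);     // zoom: 0.5 about screen point (500,500)

// World -> screen, for drawing:
RectF bounds = new RectF(0, 0, 100, 100);            // the path bounds from the example
worldToScreen.mapRect(bounds);                       // -> (200, 200, 250, 250) with the ops above
paint.setStrokeWidth(worldStrokeWidth * 0.5f);       // stroke width scales by the zoom factor

// Screen -> world, for recording a touch:
Matrix screenToWorld = new Matrix();
worldToScreen.invert(screenToWorld);
float[] touch = {event.getX(), event.getY()};
screenToWorld.mapPoints(touch);                      // touch[] now holds world coordinates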
I need a basic idea of how I can warp an image at the touch of a particular area. Image filters apply the warp to the whole image, but I want to warp around a single point; for example, if I want to warp the eye of a person, I will touch that point. So I need a basic idea of how to approach this.
I have tried this one, but it also applies filters to the whole image:
https://github.com/Jtfinlay/PhotoWarp
App:
https://play.google.com/store/apps/details?id=hu.tonuzaba.android&hl=en
A warp is not just at a "single point" but over some area that you deform in a smooth way.
To achieve this, you need a geometric transform of the coordinates that works in some neighborhood of the touched point. One way to do this is by applying a square grid to the image and moving the grid nodes around the touched point according to some law of your choosing (for instance, apply a displacement vector to all nodes, with a decaying factor so that far-away nodes don't move).
Then you need a resampling function that computes the new coordinates of every pixel and copies the color of the source pixel.
For good results, you must actually work in reverse: scan the destination image and for every pixel retrieve the source coordinates and source pixels. Apply bilinear or bicubic resampling to avoid aliasing.
For ease of implementation, the gridding idea should be adapted as well: rather than deforming the destination grid, keep it unchanged and apply the inverse deformation to the source grid.
Last thing: in the grid approach, see the displacements of the grid nodes as two scalar functions DX(i, j) and DY(i, j) that you can handle separately. From the knowledge of the displacements at the nodes, you can estimate the displacement of any pixel by interpolation (bicubic would be appropriate here).
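Here is a minimal sketch of the reverse-mapping idea in Java. To keep it short it replaces the grid-node interpolation described above with a direct per-pixel Gaussian falloff around the touched point; the class and parameter names are illustrative, and the drag is assumed to pull the content at (fromX, fromY) to (toX, toY):

import android.graphics.Bitmap;

public class PointWarp {

    // Pull the content at (fromX, fromY) toward (toX, toY); the influence
    // decays smoothly over 'radius' pixels, so far-away pixels don't move.
    public static Bitmap warp(Bitmap src, float fromX, float fromY,
                              float toX, float toY, float radius) {
        int w = src.getWidth(), h = src.getHeight();
        Bitmap dst = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        float dispX = toX - fromX, dispY = toY - fromY;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                // Work in reverse: for every destination pixel, compute the
                // source coordinates, with a smooth falloff around the touch.
                float dx = x - toX, dy = y - toY;
                float falloff = (float) Math.exp(-(dx * dx + dy * dy) / (radius * radius));
                float srcX = x - dispX * falloff;
                float srcY = y - dispY * falloff;
                dst.setPixel(x, y, bilinear(src, srcX, srcY));
            }
        }
        return dst;
    }

    // Bilinear resampling to avoid aliasing.
    private static int bilinear(Bitmap b, float x, float y) {
        int w = b.getWidth(), h = b.getHeight();
        x = Math.max(0f, Math.min(x, w - 1));   // clamp: repeat edge pixels
        y = Math.max(0f, Math.min(y, h - 1));
        int x0 = (int) x, y0 = (int) y;
        int x1 = Math.min(x0 + 1, w - 1), y1 = Math.min(y0 + 1, h - 1);
        float fx = x - x0, fy = y - y0;
        int top = mix(b.getPixel(x0, y0), b.getPixel(x1, y0), fx);
        int bottom = mix(b.getPixel(x0, y1), b.getPixel(x1, y1), fx);
        return mix(top, bottom, fy);
    }

    // Per-channel linear interpolation of two ARGB colors.
    private static int mix(int c0, int c1, float t) {
        int a = ch(c0, 24) + (int) (t * (ch(c1, 24) - ch(c0, 24)));
        int r = ch(c0, 16) + (int) (t * (ch(c1, 16) - ch(c0, 16)));
        int g = ch(c0, 8) + (int) (t * (ch(c1, 8) - ch(c0, 8)));
        int bl = ch(c0, 0) + (int) (t * (ch(c1, 0) - ch(c0, 0)));
        return (a << 24) | (r << 16) | (g << 8) | bl;
    }

    private static int ch(int color, int shift) {
        return (color >>> shift) & 0xff;
    }
}

For production quality you would replace the per-pixel falloff with the grid of displacement nodes DX(i, j), DY(i, j) and bicubic interpolation as described above; the loop structure stays the same.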
You can use the canvas to draw over that portion and then suppress the action on that region in your OnTouchListener.
Code sample:
// Load the marker bitmap (from the sample's drawable).
Bitmap pricetagBmp = BitmapFactory.decodeResource(getActivity().getResources(), R.drawable.ic_tag_circle_24dp);

// Center the bitmap inside the (left, top, right, bottom) region:
float imageStartX = (left + ((right - left) / 2)) - (pricetagBmp.getWidth() / 2);
float imageStartY = (top + ((bottom - top) / 2)) - (pricetagBmp.getHeight() / 2);
canvas.drawBitmap(pricetagBmp, imageStartX, imageStartY, circlePaint);
Then, in the OnTouchListener, if the touch point falls inside that region, you can perform no action.
Note: you can replace drawBitmap with drawRect or something else drawn in an invisible color.
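A minimal sketch of the listener side, assuming imageStartX, imageStartY and pricetagBmp from the sample above are visible to the listener:

view.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        RectF tagBounds = new RectF(imageStartX, imageStartY,
                imageStartX + pricetagBmp.getWidth(),
                imageStartY + pricetagBmp.getHeight());
        if (tagBounds.contains(event.getX(), event.getY())) {
            return true;  // consume the event: no action over the tag area
        }
        return false;     // let the normal touch handling proceed
    }
});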
I find that the coordinate system of the Canvas is different from that of the screen.
For example in my case as below:
For one particular point, its on-screen coordinates obtained from ImageView.getX() and ImageView.getY() are (336, 578).
Then, by trial and error, I drew a dot on the Canvas so that this dot falls at EXACTLY the same position as the ImageView. I called canvas.drawCircle(330, 440, radius, paint); to achieve this.
Here comes the question:
Why are the two coordinates, (336, 578) and (330, 440), different?
Is it because the screen and the canvas use different units?
Is it an issue regarding pixel, dp and all that?
You could try converting to and from window coordinates using View.getLocationInWindow; that way you'll always be sure you are using the same coordinate system.
The coordinates from ImageView.getX() and getY() are relative to the parent, while Canvas coordinates are relative to the view. This could cause the differences.
Edit:
I assume in one place you get the coordinates of the view (using getX and getY). Convert these to window coordinates; call them viewXY[].
When you need to use them to draw in your canvas, also convert the canvas view's top and left coordinates to window coordinates; call them canvasXY[].
Now, the X position to draw at would be viewXY[0] - canvasXY[0], and the Y position would be viewXY[1] - canvasXY[1].
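A minimal sketch of that conversion, assuming imageView is the view you measured and canvasView is the view whose onDraw owns the canvas:

int[] viewXY = new int[2];
imageView.getLocationInWindow(viewXY);      // the ImageView's window position

int[] canvasXY = new int[2];
canvasView.getLocationInWindow(canvasXY);   // top/left of the view backing the canvas

float drawX = viewXY[0] - canvasXY[0];      // position to draw at, in canvas coordinates
float drawY = viewXY[1] - canvasXY[1];
canvas.drawCircle(drawX, drawY, radius, paint);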
A view's position coordinates are always relative to its parent. It's like saying where, with respect to its parent, this view is placed.
Then there is another thing: screen coordinates. These say where, with respect to the window's top/left (think of the actual physical screen), the object is placed. They can be obtained through View.getLocationOnScreen.
If you want to compare two locations, they are only equivalent if both share the same parent view or you're using absolute screen coordinates. Otherwise, you'll get results that merely "seem" different.
If you want to unify the two, either take them to a common parent or use absolute coordinates.
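A minimal sketch of the absolute-coordinate comparison, with viewA and viewB as the two views in question:

int[] a = new int[2], b = new int[2];
viewA.getLocationOnScreen(a);               // absolute screen coordinates {x, y}
viewB.getLocationOnScreen(b);
boolean samePlace = (a[0] == b[0]) && (a[1] == b[1]);  // now directly comparable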
I'm testing the drawing of an XY graph on my Android tablet. It's a Samsung Galaxy Tab 2 (7") running ICS.
I've created a View subtype with an overridden onDraw method. Its job is to simply plot an array of (x,y) coordinates as a series of connected line segments. I've got a float array representing the y values, and the x values are the array indices. The y values extend from -1 to 1 and there are about 10 values. Pretty simple.
The target canvas is a square on the screen, say about 480 by 480 pixels, with +1 intended to be at the top of the screen and -1 at the bottom, and the 0th value at the extreme left and the Nth value at the extreme right.
Thus, the transformation from "world coordinates" to "screen coordinates" along the X and Y axes is not uniform. In my onDraw method, I apply a translate, a scale, and then another translate operation to the Canvas object, and then I proceed to draw the line segments using a Paint pen having a hairline stroke width of 0.
The result is a graph that's not hairline in width. Obviously, my scale operation is thickening the line segments so that gently sloping lines appear thicker than steep ones. When I change my world-coordinates extents so that they're equal along both axes (to match the square canvas), then this problem disappears.
Interestingly, this problem occurs on the tablet, but not on the Android ICS emulator.
Any thoughts on this would be appreciated. My preference is to have a hairline graph no matter what the transformation is.
The obvious work-around to this issue is to leave the canvas's matrix untouched and perform the translate/scale/translate operations myself (converting from world to screen coordinates), and then use the screen coordinates for drawing, utilizing a Paint pen with a width of 0.
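A minimal sketch of that work-around inside onDraw, assuming values[] holds the y samples in [-1, 1] and the view is the roughly 480 by 480 square; the canvas matrix is left untouched, so the 0-width stroke stays hairline:

@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    float w = getWidth(), h = getHeight();
    Paint pen = new Paint();
    pen.setStyle(Paint.Style.STROKE);
    pen.setStrokeWidth(0);                          // hairline, in screen space
    float prevX = 0, prevY = 0;
    for (int i = 0; i < values.length; i++) {
        // World -> screen by hand: x is the array index,
        // y runs from -1 (bottom of the view) to +1 (top).
        float sx = i * w / (values.length - 1);
        float sy = (1 - values[i]) * h / 2;
        if (i > 0) canvas.drawLine(prevX, prevY, sx, sy, pen);
        prevX = sx;
        prevY = sy;
    }
}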
Assume that I have an Android phone with max x = 400 and max y = 300.
I have a 5000 by 5000 px bitmap, and I display a 400 by 300 part of it on screen, scrolling it around using touch events. I want to draw a smaller bitmap (30 by 30 px) onto that larger bitmap at location (460,370). But I can't do it statically (as my game requires); I want to draw and un-draw that smaller image according to the player's input or collision detection.
What I am confused about is this: suppose currently (50,50) to (450,350) of the larger image is displayed on the screen, and now, due to a certain requirement of the game, I have to draw that 30 by 30 bitmap at (460,370). This point lives in two systems: in the big bitmap's coordinate system it is (460,370), but when it comes inside the visible area of the screen, it will have values somewhere between (0,0) and (400,300), depending on its position on the screen as the player moves.
So how do I track that image? I mean, which coordinate system should I use, and how do I go about doing all this?
I know I've put this in a very confusing way, but that's the best way I can put it, and I'll be really glad and grateful if somebody could help me with this.
Terminology:
Let's call the coordinates on the big map world coordinates
Let's call the coordinates on the screen view coordinates. You are talking to the drawing system (canvas?) in view coordinates.
Your currently displayed region of the bigger map is 100% defined by an offset; let's call it (offsetX, offsetY). Formally, this offset maps your world coordinates onto your view coordinates.
If you have the small image displayed at position (x, y) in world coordinates, all you need to do is subtract your offset to get to view coordinates, i.e.
viewX = worldX - offsetX;
viewY = worldY - offsetY;
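A minimal sketch of the drawing step, assuming offsetX/offsetY track the current scroll position and smallBmp is the 30 by 30 bitmap (all names illustrative):

float worldX = 460, worldY = 370;               // where the game logic wants the sprite
float viewX = worldX - offsetX;
float viewY = worldY - offsetY;
// Only draw when the sprite actually intersects the visible 400x300 region.
if (viewX + smallBmp.getWidth() > 0 && viewX < 400
        && viewY + smallBmp.getHeight() > 0 && viewY < 300) {
    canvas.drawBitmap(smallBmp, viewX, viewY, null);
}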
I am trying to learn OpenGL on Android. With the gl.glTranslatef(x, y, z) call, I am shifting my texture by some units in the positive x direction, but I am unable to find out how many pixels 1 unit of x corresponds to.
Here is what I am doing:
I call gl.glViewport(0, 0, width, height); // this sets my rectangle with (0,0) as the lower-left corner and extends it to accommodate the width and height.
Then
I call gl.glFrustumf(-5, 5, -7, 7, 3, 7); // I am a little confused about how this call uses the dimensions I set in gl.glViewport.
How do the -5 to 5 units from left to right in the above call translate to pixels on the Android screen?
I mean, if width = 320 and height = 533 pixels, how many pixels will be occupied on screen as a result of the gl.glFrustumf call?
I experimented with the gl.glTranslatef call by specifying an x shift of 5.0, but it does not move the bitmap to the right or left edge of the screen; when I increase it to 6, part of it is still visible on the screen.
Thanks
Siddhesh
In short, I am looking for the maximum number of units (in terms of X) that represent the extreme edges of my Android phone's screen.
glViewport tells it what rectangle (in pixels) your OpenGL output should be displayed in.
glFrustum tells it what coordinates in your "world" units should be mapped to that viewport.
An important point: your glFrustum call includes not only a height and width, but also a depth. Since you are specifying a Frustum, not a cube, that means anything with a Z coordinate anywhere but the very front of your frustum will be scaled down appropriately for its distance from the viewer.
As such, when you do a glTranslatef, the distance by which a particular object moves (in terms of pixels) will depend on its distance from the viewer. The further away it is from the viewer, the fewer pixels a given sideways or up/down shift will translate to.
Depending on what else you're doing, one easy way to deal with this might be to use glOrtho instead of glFrustum. glOrtho gives orthographic mode, which means no perspective scaling is done, so a given X or Y distance will translate to the same number of pixels, regardless of distance from the viewer.
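A minimal sketch of the orthographic setup, reusing the ranges from the question via the GL10 API; the pixel arithmetic in the comments holds only for these exact numbers:

gl.glViewport(0, 0, 320, 533);              // output rectangle, in pixels
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glOrthof(-5, 5, -7, 7, 3, 7);            // 10 world units span 320 pixels
// With an orthographic projection, 1 unit of X is always 320 / 10 = 32 pixels,
// so gl.glTranslatef(5f, 0f, 0f) moves an object exactly 160 pixels to the
// right, regardless of its Z.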