I've got the following problem:
Currently, I draw a picture within a SurfaceView, and the user is allowed to draw on top of this picture. The drawn "figures" are handled as path objects. In addition, the user can pan as well as zoom the image. The zoom and pan handling is done using a matrix: the picture as well as every path object has its own matrix.
So when the user starts the app, a picture is shown. The user can then add one path object after another, and we create a new matrix for every object. If the user zooms in, for example, the path objects as well as the picture are zoomed.
Now my goal is this: if a path object is added while the picture is zoomed in or zoomed out, the path object should be added at the touched position, but it should be scaled to the scale of the other already-drawn objects. In other, simpler words: if I add a path object, it should have the same scale as the already existing objects, but it should appear at its new position.
So far I have tried different ways of solving the problem, but nothing has worked:
First of all, I scaled the Canvas with one single matrix instead of scaling every path object on its own. But then the added path objects appeared at the wrong position while the picture was zoomed in.
Next, I tried to get the current scale and then scale the path object when I created it. This did not work either: none of the methods (preScale, postScale, setScale, ...) gave the right result. Either the whole path was wrong or it again appeared at the wrong position.
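For illustration, the kind of mapping I suspect is needed looks something like this (a rough sketch, assuming a single shared transform matrix for the pan/zoom; the names transform and startPathAt are mine):

    import android.graphics.Matrix;
    import android.graphics.Path;
    import android.view.MotionEvent;

    // Sketch: map a raw screen touch into the picture's coordinate space.
    // 'transform' is assumed to hold the current pan/zoom.
    Path startPathAt(MotionEvent event, Matrix transform) {
        float[] point = { event.getX(), event.getY() };
        Matrix inverse = new Matrix();
        if (transform.invert(inverse)) {
            inverse.mapPoints(point); // 'point' is now in picture coordinates
        }
        // A path started here and later drawn through the same 'transform'
        // keeps the same scale as the already existing paths.
        Path path = new Path();
        path.moveTo(point[0], point[1]);
        return path;
    }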
Does anyone know a solution?
Related
I have a fullscreen custom view in my activity where I have to draw paths, each formed by a single line. Since my view supports scaling, depending on how far the view is zoomed in, the paths get too big and the view therefore becomes slow, even crashing sometimes. What I want to know is whether there's some built-in function with which I can draw only the part of the path that is actually inside the canvas.
I've tried using Path.op with a rect the size of the screen, but didn't get any results; it either returns an empty path or a path that contours the screen. Same thing with Canvas.clipPath(). Passing a rect through invalidate() also doesn't work for me, since said path IS intersecting the drawing area.
It's also worth mentioning that I'm using paths because I need to use a PathDashPathEffect. So if there's a way of doing that with drawLine, it'd also help.
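For reference, the Path.op attempt described above looked roughly like this (a sketch; requires API 19+, and clipToScreen is an illustrative name):

    import android.graphics.Path;
    import android.graphics.RectF;

    // Sketch: intersect the full path with a screen-sized rectangle so
    // that only the visible portion remains.
    Path clipToScreen(Path fullPath, int viewWidth, int viewHeight) {
        Path screen = new Path();
        screen.addRect(new RectF(0, 0, viewWidth, viewHeight), Path.Direction.CW);

        Path visible = new Path(fullPath); // work on a copy
        visible.op(screen, Path.Op.INTERSECT);
        return visible;
    }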
In Android I am drawing paths in a custom view using the onDraw() method. After drawing many paths, I want to identify the paths individually and transform each of them differently. I have determined whether a point is on a path using RectF and Region, but getting the contained path from the region (using getBoundaryPath()) is not helping me. Can someone help me with this? I just want to get the object of the selected path by touching the screen. Thanks.
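For context, the RectF/Region test mentioned above is usually written roughly like this (a sketch; looping over all stored paths and returning the first match is assumed, and open stroked paths may need to be tested against a thickened copy):

    import android.graphics.Path;
    import android.graphics.RectF;
    import android.graphics.Region;

    // Sketch: test whether a touch point lies within a path's region.
    boolean isTouchOnPath(Path path, float touchX, float touchY) {
        RectF bounds = new RectF();
        path.computeBounds(bounds, true);

        Region region = new Region();
        region.setPath(path, new Region((int) bounds.left, (int) bounds.top,
                                        (int) bounds.right, (int) bounds.bottom));
        return region.contains((int) touchX, (int) touchY);
    }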
I'm struggling with something I would expect to be straightforward with libgdx.
In short, this is a "finger paint" app where I want to draw a path of a certain width wherever the user touches the screen.
I've done this before using a plain Android android.view.View. I had an android.graphics.Path in which I stored the coordinates of the user's current touch. In the onDraw() method of the view I drew the path to the android.graphics.Canvas. Whenever the user released a finger, I drew the path to an offline canvas/android.graphics.Bitmap, which was also drawn in the onDraw() method. Plain and simple.
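For reference, that plain-View version is roughly the following (a condensed sketch; the class and field names are mine):

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.graphics.Paint;
    import android.graphics.Path;
    import android.view.MotionEvent;
    import android.view.View;

    // Sketch: the live path is drawn every frame and committed to an
    // offscreen bitmap when the finger is released.
    public class FingerPaintView extends View {
        private final Path current = new Path();
        private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        private Bitmap committed;
        private Canvas committedCanvas;

        public FingerPaintView(Context context) {
            super(context);
            paint.setStyle(Paint.Style.STROKE);
            paint.setStrokeWidth(12f);
        }

        @Override
        protected void onSizeChanged(int w, int h, int oldw, int oldh) {
            super.onSizeChanged(w, h, oldw, oldh);
            committed = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
            committedCanvas = new Canvas(committed);
        }

        @Override
        public boolean onTouchEvent(MotionEvent e) {
            switch (e.getAction()) {
                case MotionEvent.ACTION_DOWN: current.moveTo(e.getX(), e.getY()); break;
                case MotionEvent.ACTION_MOVE: current.lineTo(e.getX(), e.getY()); break;
                case MotionEvent.ACTION_UP:
                    committedCanvas.drawPath(current, paint); // commit offline
                    current.reset();
                    break;
            }
            invalidate();
            return true;
        }

        @Override
        protected void onDraw(Canvas canvas) {
            canvas.drawBitmap(committed, 0, 0, null);
            canvas.drawPath(current, paint);
        }
    }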
How can that be done using libgdx?
I have tried using a com.badlogic.gdx.graphics.Pixmap that I can draw a line to whenever the user moves a finger. This works well except that I'm unable to control the width of the line using Gdx.gl.glLineWidth(). I know I can draw a rectangle instead of a line to set the width, but Pixmap doesn't seem to have any means of rotating, so I don't see how this can be done.
I can use a com.badlogic.gdx.graphics.glutils.ShapeRenderer for drawing lines (or rectangles) in com.badlogic.gdx.Screen.render(). As far as I can see, I would then need to store every single touch point of the current touch and draw all the lines on render. Whenever the user releases a finger, I guess I can store the screen as-is with something like com.badlogic.gdx.utils.ScreenUtils.getFrameBufferPixmap(). Hopefully there is an easier way to achieve what I want.
I ended up drawing circles on a pixmap where the line should be:
One circle on touchDown
Several circles from the last touch point to the new touch point reported to touchDragged
I'm not very happy with this solution, but it kinda works.
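Roughly, the stamping looks like this (a sketch; stampLine is my name, the step spacing is a judgment call, and pixmap.setColor(...) is assumed to have been set beforehand):

    import com.badlogic.gdx.graphics.Pixmap;

    // Sketch: fill circles along the segment from the previous touch
    // point to the current one; 'radius' controls the line width.
    void stampLine(Pixmap pixmap, int x0, int y0, int x1, int y1, int radius) {
        float dx = x1 - x0;
        float dy = y1 - y0;
        float dist = (float) Math.sqrt(dx * dx + dy * dy);
        int steps = Math.max(1, (int) (dist / (radius * 0.5f)));

        for (int i = 0; i <= steps; i++) {
            float t = i / (float) steps;
            pixmap.fillCircle((int) (x0 + t * dx), (int) (y0 + t * dy), radius);
        }
    }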
Maybe you can calculate the line's corner coordinates manually and use triangles to draw it on the pixmap? Like using 2 triangles to form a (rotated) box?
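That could look something like this (an untested sketch of the two-triangle idea; fillBox is an illustrative name):

    import com.badlogic.gdx.graphics.Pixmap;

    // Sketch: build a rotated box of width 'w' along the segment from
    // (x0,y0) to (x1,y1) and fill it as two triangles.
    void fillBox(Pixmap pixmap, float x0, float y0, float x1, float y1, float w) {
        float dx = x1 - x0, dy = y1 - y0;
        float len = (float) Math.sqrt(dx * dx + dy * dy);
        if (len == 0) return;
        // unit normal to the segment, scaled to half the desired width
        float nx = -dy / len * (w / 2f);
        float ny =  dx / len * (w / 2f);

        pixmap.fillTriangle((int) (x0 + nx), (int) (y0 + ny),
                            (int) (x0 - nx), (int) (y0 - ny),
                            (int) (x1 - nx), (int) (y1 - ny));
        pixmap.fillTriangle((int) (x0 + nx), (int) (y0 + ny),
                            (int) (x1 - nx), (int) (y1 - ny),
                            (int) (x1 + nx), (int) (y1 + ny));
    }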
I am using the Multitouch Controller made by Luke Hutchison, found here:
https://code.google.com/p/android-multitouch-controller/
My problem is this: I can place an image onto the canvas, and I can select it, move it, rotate it, and scale it perfectly fine. However, I also allow the user to move the canvas by translating it on drag. On screen, the images appear in the correct location after the translation of the entire canvas; the problem is trying to select them afterwards.
I don't know whether it's an issue with the touch point not taking the new canvas location/offset into account after the canvas translation, or whether the locations of the images haven't been updated with the canvas translation. Does anyone have experience with this, or could anyone point me in the right direction?
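For what it's worth, one direction is to map the touch point back into the canvas's coordinate space before hit-testing, so the touches and the image positions live in the same space; a minimal sketch, assuming the drag offset applied via canvas.translate() is tracked in two fields:

    import android.graphics.PointF;
    import android.view.MotionEvent;

    // Sketch: undo the canvas translation before passing the point to
    // the multitouch controller's hit test. 'offsetX'/'offsetY' are
    // assumed to accumulate the drag applied with canvas.translate().
    PointF toContentCoords(MotionEvent event, float offsetX, float offsetY) {
        return new PointF(event.getX() - offsetX, event.getY() - offsetY);
    }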
I want to be able to dynamically place image(s) over another image in my app.
Consider the first image as a background and the other images to be on a top layer; I will also need to move those top-layer images (change their x and y on the screen) by code too.
Imagine, for example, a sea in which the user places fish and other sea animals, and those animals then start to move around on the screen: it will be something like that.
How can I do this? If you don't know but remember some simple program or demo that does this, that would also be very welcome!
Thank you!
There is, of course, more than one way to do this, but I would say the best way would be to create a custom View (a class that derives from View) and have it handle your bitmap drawing and all of your touch events.
There's a lot of code to write for loading the Bitmaps and keeping track of all of their positions (and then drawing them to the canvas in onDraw), but if you start really small by just allowing one image to be drawn and dragged around the screen, you can build on that and keep your code organized.
You would need to override onDraw(Canvas) and onTouchEvent(MotionEvent) in your custom View. You'll load your bitmaps with BitmapFactory (the decodeResource method if you're including your images as resources in your project), and you'll need to remember to call recycle() on your bitmaps when you're no longer using them.
In onDraw, you draw your bitmaps to the canvas at a specific location using Canvas.drawBitmap. There are two overloads of this method to choose from: one that takes the top and left coordinates of the bitmap as floats (and therefore performs no scaling or stretching), and one that takes a destination and source rectangle to perform scaling, stretching, and placement.
I always use the latter, as it gives me finer-grained control. If you choose this route, you'll want to keep two Rect instances and a Bitmap instance for each image being drawn, update them in the touch events, and draw them to the canvas in the draw event.
When something changes inside your view (as in the case of a touch event), call the invalidate() method, and the framework will know to redraw everything, which triggers your onDraw method.
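Putting those pieces together, a minimal single-image version might look like this (a sketch rather than a drop-in solution; R.drawable.fish is a placeholder resource, and supporting several images would mean keeping a list of these bitmap/rect pairs):

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.graphics.Canvas;
    import android.graphics.Rect;
    import android.view.MotionEvent;
    import android.view.View;

    // Sketch: one bitmap that can be dragged around the screen.
    public class DraggableImageView extends View {
        private final Bitmap bitmap;
        private final Rect src; // the whole bitmap
        private final Rect dst; // where (and how large) it is drawn
        private float lastX, lastY;

        public DraggableImageView(Context context) {
            super(context);
            bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.fish);
            src = new Rect(0, 0, bitmap.getWidth(), bitmap.getHeight());
            dst = new Rect(0, 0, bitmap.getWidth(), bitmap.getHeight());
        }

        @Override
        public boolean onTouchEvent(MotionEvent e) {
            switch (e.getAction()) {
                case MotionEvent.ACTION_DOWN:
                    lastX = e.getX();
                    lastY = e.getY();
                    // only claim the gesture if it starts on the image
                    return dst.contains((int) e.getX(), (int) e.getY());
                case MotionEvent.ACTION_MOVE:
                    dst.offset((int) (e.getX() - lastX), (int) (e.getY() - lastY));
                    lastX = e.getX();
                    lastY = e.getY();
                    invalidate(); // triggers onDraw with the new position
                    return true;
            }
            return super.onTouchEvent(e);
        }

        @Override
        protected void onDraw(Canvas canvas) {
            canvas.drawBitmap(bitmap, src, dst, null); // src->dst scales and places
        }
    }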