Android canvas: is it possible to draw using vector data?

The most common way to draw with your finger on screen is to use Android's built-in Canvas. The usual steps are:
1) Create a custom view (drawing view).
2) Set a bitmap as the canvas's backing store.
3) Initialize a Path (set up the Paint, ...).
4) Implement onTouchEvent (DOWN/UP/MOVE) to get the position points (X and Y).
5) Trigger onDraw() to draw each path according to the X and Y values.
So my data here is X and Y (the finger position on screen).
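Steps 4 and 5 above can be sketched in plain Java. The `StrokeRecorder` class and the `ACTION_*` constants below are stand-ins for the Android types (in a real View you would read `event.getAction()`, `event.getX()`, and `event.getY()` from the `MotionEvent`); this is a minimal sketch, not the original app's code:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of step 4: collecting finger positions into strokes.
// The ACTION_* values mirror MotionEvent's real constants.
class StrokeRecorder {
    public static final int ACTION_DOWN = 0, ACTION_UP = 1, ACTION_MOVE = 2;

    public final List<List<float[]>> strokes = new ArrayList<>();
    private List<float[]> current;

    public void onTouch(int action, float x, float y) {
        if (action == ACTION_DOWN) {            // finger lands: start a new stroke
            current = new ArrayList<>();
            current.add(new float[]{x, y});
            strokes.add(current);
        } else if (action == ACTION_MOVE && current != null) {
            current.add(new float[]{x, y});     // finger drags: extend the stroke
        } else if (action == ACTION_UP) {
            current = null;                     // finger lifts: stroke is done
        }
    }
}
```

In the real view, onDraw() would then walk `strokes` and replay each point list into an android.graphics.Path.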
Problem
Some note apps use a special technique called "drawing with vectors". At first I didn't get the idea, because all I know is the Canvas and the old way of finger drawing.
After research
I researched a bit and found out that vectors are graphics that can scale without losing quality.
I found a post similar to my problem: this one.
If you open the link and read the answer, you will see that @Trevor gave a very good answer there. He said that you must capture the path data from your finger input and store it as vector data in memory.
Store them as vector data in memory??
Okay, what is that supposed to mean? Where should I store it, and in what format, given that all I can get from the finger are two float positions, X and Y? So how do I save that as a vector, and where?
If I was able to save them
How do I retrieve them and draw them back using the Canvas onDraw() method?
Do you think this is what drawing with vectors means? If so, what is the purpose?
I am looking for a way to enhance the drawing technique on Canvas and manipulate the drawings to make them feel more realistic.
Thanks for your help.

Scalable Vector Graphics (.svg) is a simple XML-based file format. You would probably generate an .svg path on the fly from the user's input, which you can then draw to the screen again (with appropriate libraries) and can also save/reload into your application very easily.
Here is a tutorial on SVG paths, which sounds like what you want; Bézier curves make for a good interpolation between your data points.
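Generating that path data is mostly string building. Below is a sketch, assuming the sampled finger points are `{x, y}` float pairs: each recorded point becomes the control point of a quadratic Bézier ("Q") segment ending at the midpoint to the next point, the same smoothing trick commonly used with `android.graphics.Path.quadTo()`. `SvgPathBuilder` is an illustrative name, not from any answer:

```java
import java.util.Locale;

// Sketch: turn a list of sampled finger points into an SVG path "d" string.
// Each interior point is used as the control point of a quadratic segment
// that ends at the midpoint to the following point, which smooths the stroke.
class SvgPathBuilder {
    public static String toPathData(float[][] pts) {
        if (pts.length == 0) return "";
        StringBuilder d = new StringBuilder(
                String.format(Locale.US, "M %.1f %.1f", pts[0][0], pts[0][1]));
        for (int i = 1; i < pts.length - 1; i++) {
            float midX = (pts[i][0] + pts[i + 1][0]) / 2;   // segment endpoint
            float midY = (pts[i][1] + pts[i + 1][1]) / 2;
            d.append(String.format(Locale.US, " Q %.1f %.1f %.1f %.1f",
                    pts[i][0], pts[i][1], midX, midY));
        }
        if (pts.length > 1) {
            float[] last = pts[pts.length - 1];
            d.append(String.format(Locale.US, " L %.1f %.1f", last[0], last[1]));
        }
        return d.toString();
    }
}
```

The resulting string is both what you store (your "vector data") and what you hand to an SVG renderer to redraw the stroke at any scale.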

Related

Android: What to use for rendering the layers of photo-editor?

Hello friends!
I need to create a photo editor that allows putting emoticons, text, and brush drawings on a picture.
Open the illustration
The editor must be able to change the position, size, and rotation of the emoticons and text with two fingers (multi-touch).
The mechanics are clear to me. I found a ready-made implementation of a multi-touch controller:
https://github.com/lukehutch/android-multitouch-controller
But I don't understand how best to visualize all the layers in terms of performance:
Layer 3 - text
Layer 2 - emoticons
Layer 1 - drawing
Layer 0 - photo
I am afraid to use the Canvas without your opinion. I have heard that the Canvas is buggy when displaying a large number of images.
I found examples that visualize image layers using a layer-list with <item> elements inside. I think this method will perform better for my task.
But I have not found documentation on how to update an item's position (top/left) when the user moves it.
My question is: what is best to use for visualizing all the layers, with the ability to save the final image (merging all layers)?
Please help me decide what to choose and what the right path is!
Thank you in advance! :)
Canvas is not buggy. It's the only way for you to render things onto a Bitmap. By the looks of your requirements, I think you need to draw your layers onto different bitmaps. Layer 0 will be your default bitmap; every other layer will be an individual bitmap of its own. The reason they each have to be on their own bitmap is so that you can move them as you wish.
Your final merge will be to draw all of these bitmaps onto the default bitmap via a Canvas.drawBitmap() call.
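The merge step can be sketched with plain pixel arrays. This is only an illustration of the layer order, assuming each layer is a pixel array the size of the base photo with 0 meaning "fully transparent"; on Android, `Canvas.drawBitmap()` does the same per-pixel work with proper alpha blending:

```java
// Sketch of the final merge: layers are applied bottom-to-top onto a copy of
// the base photo (layer 0). A non-zero pixel in a layer is treated as opaque
// and overwrites whatever is below it; real blending would use alpha.
class LayerMerger {
    public static int[] merge(int[] base, int[][] layers) {
        int[] out = base.clone();
        for (int[] layer : layers) {                 // bottom-to-top order
            for (int i = 0; i < out.length; i++) {
                if (layer[i] != 0) out[i] = layer[i]; // opaque pixel wins
            }
        }
        return out;
    }
}
```

The important property is that each layer stays untouched in its own array/bitmap, so you can still move or rotate it later and simply re-merge.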

Using vector graphics on android

I'm currently attempting to make a simple 2D CAD-type viewer app for Android. Basically the input file contains a bunch of primitives (rectangles, lines, circles, octagons, that type of thing), and the goal is to draw these to the screen at whatever coordinates/sizes they offer.
My initial instinct is to use a Canvas to draw these to, using a quadtree or some similar structure to track which items will show up on the screen at any given time.
Does anyone have any recommendations here for a better way to implement this (my graphics programming experience is minimal, and hence I'm having problems even finding a starting point to Google from)?
Thanks in advance,
-Ross
That's a very broad question so my answer will only point at classes that you should be looking at.
Extend a SurfaceView to be your cadView; that way you can do all the calculation outside the main thread.
You'll still have to draw on the Canvas.
From the Canvas you can call getWidth() and getHeight() and use those values as a baseline for comparing your positions.
Canvas has some primitive drawing types, like arc, circle, and point.
Further, you can use Path to draw full figures: lines, fills, quadratic curves, etc.
For backgrounds you can create color drawables and draw them on the Canvas.
And that's pretty much it.
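The "compare against getWidth()/getHeight()" idea, and the quadtree the question mentions, both come down to the same visibility test: only draw primitives whose bounding box intersects the visible rectangle. A minimal linear-scan sketch (a quadtree would just make the same test faster for large drawings; `ViewportCuller` is an illustrative name):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: cull primitives before drawing. Each bounding box is
// {left, top, right, bottom}; the viewport is (0, 0) .. (w, h),
// i.e. the values you'd get from canvas.getWidth()/getHeight().
class ViewportCuller {
    public static List<float[]> visible(List<float[]> boxes, float w, float h) {
        List<float[]> out = new ArrayList<>();
        for (float[] b : boxes) {
            // Off-screen if entirely left/right/above/below the viewport.
            boolean offscreen = b[2] < 0 || b[0] > w || b[3] < 0 || b[1] > h;
            if (!offscreen) out.add(b);
        }
        return out;
    }
}
```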

To move an image towards 3 dimensions in android application

I want to move an image in a 3-dimensional way in my Android application according to my device's movement. For this I am getting the X, Y, and Z coordinate values through a SensorEvent, but I am unable to find APIs to move an image in 3 dimensions. Could anyone please provide a way (any APIs) to get a solution?
Depending on the particulars of your application, you could consider using OpenGL ES for manipulations in three dimensions. A quite common approach then would be to render the image onto a 'quad' (basically a flat surface consisting of two triangles) and manipulate that using matrices you construct based on the accelerometer data.
An alternative might be to look into extending the standard ImageView, which out of the box supports manipulations by 3x3 matrices. For rotation this will be sufficient, but you will obviously need an extra dimension for translation, which you're probably after, given your remark about 'moving' an image.
If you decide to go with the first suggestion, this example code should be quite useful to start with. You'll probably be able to plug your sensor data straight into that and simply add the required math for the matrix manipulations.
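The 3x3-matrix route from the second suggestion can be sketched without any Android classes. This is a hedged illustration, assuming the sensor gives you a rotation angle about the Z axis; `android.graphics.Matrix` / `ImageView.setImageMatrix()` accept exactly this kind of 3x3 matrix, while true 3D translation needs the 4x4 matrices of OpenGL ES:

```java
// Sketch: build a 3x3 rotation matrix from a sensor-derived angle and apply
// it to a point in homogeneous 2D coordinates:
//   | c -s 0 |   |x|
//   | s  c 0 | * |y|
//   | 0  0 1 |   |1|
class Rotate2D {
    public static float[] rotate(float angleRad, float x, float y) {
        float c = (float) Math.cos(angleRad);
        float s = (float) Math.sin(angleRad);
        return new float[]{c * x - s * y, s * x + c * y};
    }
}
```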

Android Path collision problems/solutions

I have a drawing application in Android that allows the user to draw with their finger, and then stores the resulting shapes as Android Paths. To allow the user to delete individual Paths they have drawn, I have implemented this solution that uses a bounding Rect for each Path, and then uses an inner multi-dimensional binary array to represent the pixels inside the bounding Rect. I then populate the array by taking the Path's control points and track along it using the mathematical equation for a quadratic bezier curve, setting each element in the array that would have a pixel underneath it to 1.
Using the above setup, when in erasing mode, I first check for collision between the users finger and the bounding Rect, and if that collides, I then check to see if the pixel being touched by the user is set to a 1 in the array.
Now, when a user loads a note, I load all of the shapes into an ArrayList of 'stroke' objects so that I can then easily display them and can loop through them checking for collision when in erase mode. I also store the Rect and binary array with the strokes in the custom object. Everything is working as expected, but the memory footprint of storing all of this data, specifically the binary array for each Path's bounding Rect, is getting expensive, and when a user has a large number of strokes I am getting a java.lang.OutOfMemoryError on the portion of my code that is creating the array for each stroke.
Any suggestions on a better way to accomplish this? Essentially, I am trying to determine collision between two Android Paths (the drawing Path, and then a Path that the user creates when in erase mode), and while the above works in theory, in practice it is not feasible.
Thanks,
Paul
What is the actual representation of the "binary array"? I think if you tweak the representation to reflect the data you actually need to store (for example, RLE-encode the bits: at this y, starting at this x, and for z pixels), you will be able to store what you need without excessive size.
Storing an actual array of bytes with one byte per pixel, or one byte per 8 pixels (if that is what you are doing), isn't necessary for this use.
Another alternative is not to store a bitmap at all, just the control points and bounding boxes. If a touch intersects a bounding box, you calculate the bitmap on the fly from the control points.
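The run-length idea in the first suggestion can be sketched per row of the bounding Rect. This is an illustrative encoding (`RowRle` is not from the answer): a stroke crossing a row usually touches only a few pixels, so storing `(startX, length)` runs collapses mostly-empty rows to almost nothing:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: RLE-encode one row of 0/1 pixels from the bounding Rect as
// {startX, length} runs of set pixels. Hit-testing a touch at (x, y) then
// becomes "does any run in row y cover x?" instead of an array lookup.
class RowRle {
    public static List<int[]> encode(int[] row) {
        List<int[]> runs = new ArrayList<>();
        int start = -1;
        for (int x = 0; x <= row.length; x++) {
            boolean set = x < row.length && row[x] != 0;
            if (set && start < 0) start = x;        // a run begins
            if (!set && start >= 0) {               // the run ends
                runs.add(new int[]{start, x - start});
                start = -1;
            }
        }
        return runs;
    }
}
```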

Best Strategy for Storing Handwriting

I am writing a mobile app (Android) that will allow the user to 'write' to a canvas using a single-touch device with 1 pixel accuracy. The app will be running on a tablet device that will be approximately standard 8 1/2" x 11" size. My strategy is to store the 'text' as vector data, in that each stroke of the input device will essentially be a vector consisting of a start point, and end point, and some number of intermediate points that help to define the shape of the vector (generated by the touchscreen/OS on touch movement). This should allow me to keep track of the order that the strokes were put down (to support undo, etc) and be flexible enough to allow this text to be re-sized, etc like any other vector graphic.
However, doing some very rough back of the envelope calculations, with a highly accurate input device and a large screen such that you can emulate on a one for one basis the standard paper notepad, that means you will have ~1,700 strokes per full page of text. Figuring, worst-case, that each stroke could be composed of up to ~20-30 individual points (a point for every pixel or so of the stroke), that means ~50,000 data points per page... WAY too big for SQLite/Android to handle with any expectation of reliability when a page is being reloaded and the vector strokes are being recreated (I have to imagine that pulling 50,000+ results from the SQLite db will exceed the CursorWindow limit of 1Mb)
I know I could break up the data retrieval into multiple queries, or could modify the stroke data such that I only add an intermediate point to help define the stroke vector shape if it is more than X pixels from a start, finish or other intermediate pixel, but I am wondering if I need to rethink this strategy from the ground up...
Any suggestions on how to tackle this problem in a more efficient way?
Thanks!
Paul
Is there any reason for using vector data in the first place? Without knowing your other requirements, it seems to me that you could just store the data as a raster/bitmap and compress it with regular compression methods such as PNG/zip/DjVu (or, if performance suffers, something simple like run-length encoding/RLE).
EDIT: sorry, I didn't read the question carefully. However, if you only need things like "undo" and "resize", you can take a snapshot of the bitmap for every stroke (of course you only need to snapshot the regions that change).
Also it might be possible to take a hybrid approach where you display a snapshot bitmap first while waiting for the (real) vector images to load.
Furthermore, I am not familiar with the Android cursor limit, but SQL queries can always be rewritten to split the result into pieces (via LIMIT ... OFFSET).
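The thinning idea mentioned in the question (only keep an intermediate point if it is more than X pixels from the last kept one) can be sketched as follows; `StrokeThinner` and `minDist` are illustrative names, not from the original app:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: drop intermediate stroke points that are closer than minDist to the
// last point we kept. The first and last points always survive, so the stroke
// endpoints stay exact; only the in-between sampling gets sparser.
class StrokeThinner {
    public static List<float[]> thin(List<float[]> pts, float minDist) {
        if (pts.size() <= 2) return new ArrayList<>(pts);
        List<float[]> out = new ArrayList<>();
        out.add(pts.get(0));
        float[] last = pts.get(0);
        for (int i = 1; i < pts.size() - 1; i++) {
            float dx = pts.get(i)[0] - last[0];
            float dy = pts.get(i)[1] - last[1];
            if (dx * dx + dy * dy >= minDist * minDist) { // far enough: keep it
                out.add(pts.get(i));
                last = pts.get(i);
            }
        }
        out.add(pts.get(pts.size() - 1));
        return out;
    }
}
```

With ~20-30 raw points per stroke, even a small threshold cuts the per-page point count well below the back-of-the-envelope 50,000.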
The solution I am using for now, although I would be open to any further suggestions:
Create a canvas View that can both convert SVG paths to Android Paths and intercept motion events, converting them to Android Paths while also storing them as SVG paths.
Display the Android Paths on the screen in onDraw().
Write the SVG paths out to an .svg file.
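The last step is just wrapping the stored path data in a minimal standalone .svg document. A sketch, where the width/height and stroke styling are illustrative placeholders rather than values from the original app:

```java
import java.util.List;

// Sketch: serialize collected SVG path "d" strings into one minimal .svg
// document that any SVG renderer (or browser) can reload and rescale.
class SvgWriter {
    public static String toSvg(List<String> pathData, int w, int h) {
        StringBuilder sb = new StringBuilder();
        sb.append("<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"")
          .append(w).append("\" height=\"").append(h).append("\">\n");
        for (String d : pathData) {
            sb.append("  <path d=\"").append(d)
              .append("\" fill=\"none\" stroke=\"black\"/>\n");
        }
        sb.append("</svg>\n");
        return sb.toString();
    }
}
```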
Paul
