In my Android application I have created an SVG image converter class. It parses the SVG XML data and converts it into the appropriate Path, Paint and Matrix objects, which can then be applied to the Canvas. Using this class I have implemented a View which uses the converter to draw images I've produced in Inkscape on the screen. So far, so good. (I appreciate that writing my own SVG converter could be considered reinventing the wheel, since it's been done before, but for me it's a useful learning exercise in my first Android application, and it should give me some extra flexibility.)
The purpose of using SVG is so that I can quickly and easily author various designs of graphical gauge. Each gauge typically consists of a section of graphics which only need to be drawn once (e.g. the gauge face and legends), plus graphics which are regularly updated (pointer angle, numeric value text).
At present my gauge View is not very efficient because every time onDraw() is called, my SVG class is called to rattle through the entire SVG file to produce all the vector data for the Canvas.
What I would like to do is have some intermediate storage of the vector data so that the SVG XML file only needs to be parsed once. I was therefore thinking that the View could lazy-initialize itself from the SVG file on the first onDraw() and then store all of the resulting Path, Paint and Matrix objects in various Lists. On each subsequent onDraw(), I would just pull those out of the List(s) and rattle through them onto the Canvas.
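Roughly what I have in mind, as a minimal sketch (initFromSvg() is a placeholder for a call into my own converter class):

import java.util.ArrayList;
import java.util.List;
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;
import android.util.AttributeSet;
import android.view.View;

public class GaugeView extends View {
    private List<Path> paths;   // parsed once, reused on every draw
    private List<Paint> paints; // one Paint per Path, same ordering

    public GaugeView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (paths == null) {
            initFromSvg(); // first draw only: parse the SVG XML
        }
        for (int i = 0; i < paths.size(); i++) {
            canvas.drawPath(paths.get(i), paints.get(i));
        }
    }

    private void initFromSvg() {
        paths = new ArrayList<>();
        paints = new ArrayList<>();
        // ... run my SVG converter here and fill both lists ...
    }
}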
A further advantage of this would be to use separate Lists to store the sections of vector graphics that are "moving", e.g. the gauge pointer. I thought of doing this by assigning a certain 'magic' ID to the group of paths in Inkscape that represents the pointer; the SVG parser class would then recognise that this group needs to be stored separately from the 'still' graphics. Then, whenever I need to refresh the angle of the pointer to match the measurement data, the View will only apply the rotational transform to that bunch of vector data. In fact, I'm thinking of drawing the moving pointer graphics in a child View, so that only the child View is redrawn when the pointer has to be refreshed.
The end objective is this: I (or perhaps even users) could fire up a vector imaging program like Inkscape and quickly produce a new design of gauge widget, embedding a bit of metadata to indicate which bits of the graphics have to be manipulated according to measurement data.
Rather than asking for a solution to a problem as such, I'd like to hear opinions about what I'm doing here, and whether what I'm proposing could be done in a much more optimised way. Could it be very memory inefficient to cache groups of Path and Paint objects?
Furthermore, once it's good enough(!) I'll gladly publish my SVG class somewhere if anyone would find it useful.
Implement and measure!
Start with the straightforward approach: store your parsed vector data in lists in memory. If memory usage and rendering speed are acceptable, problem solved.
If not, try other things and measure. Some ideas:
parse the SVG once, render it once to a Bitmap, and reuse that bitmap (sketched below)
render the SVG as part of the build process and ship raster bitmaps with the app
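A minimal sketch of the first idea, assuming it lives inside your gauge View subclass and that drawStaticLayers()/drawPointer() stand in for whatever renders your parsed SVG:

import android.graphics.Bitmap;
import android.graphics.Canvas;

// Inside the gauge View subclass:
private Bitmap cache;

@Override
protected void onSizeChanged(int w, int h, int oldw, int oldh) {
    super.onSizeChanged(w, h, oldw, oldh);
    cache = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    drawStaticLayers(new Canvas(cache)); // expensive vector work, done once
}

@Override
protected void onDraw(Canvas canvas) {
    canvas.drawBitmap(cache, 0, 0, null); // cheap blit on every frame
    drawPointer(canvas);                  // only the moving parts re-render
}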
Except for the simplest cases, we're not very good at assessing how effective a particular technique is going to be. Hence the popular saying that premature optimization is the root of all evil.
I am following this tutorial to add OpenGL to my Android app: https://www3.ntu.edu.sg/home/ehchua/programming/android/Android_3D.html. In all the examples the shapes are created in the MyGLRenderer constructor, but I want to know how I can add OpenGL shapes dynamically after the renderer has been created. How can this be done?
You create an interface or class called Shape. This will contain the data needed to render a 3D shape (vertices, indices, color data, etc.), or alternatively the VAO/VBO/texture IDs and other handles used for rendering.
The advantage of using a class is that you can keep the data and methods in one place while retaining the ability to extend it into more specific classes (Cube, Pyramid, Model, etc.) to customize your objects, or just use different instances loaded with different data. There are lots of ways to do the same thing here.
You can add more data after renderer initialization, and by storing your data you can reuse it later. Assuming the raw model data is stored in a way that can be reused for your purpose, you can apply matrices to get different positions and "instances" of the same object. The point is that you can create new shapes at any time; OpenGL doesn't care whether that happens when the renderer is initialized or a while after the fact, as long as all the data is in place when you try to use it.
After you create the class(es) and the relevant instances of them, you create a new list or map:
public List<Shape> shapes = new ArrayList<>();
// Add whatever shapes you want: create them at runtime (generated)
// or keep them static. It is up to you.
In the class you create, you can also implement a rendering method and do the drawing from inside the class itself. If you don't define a draw method on the class, you have to draw every object manually in the main rendering method.
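Putting those pieces together, a hypothetical sketch using the GL10 API from the tutorial you linked (the Shape interface and addShape() are my own names, not part of the tutorial):

import java.util.ArrayList;
import java.util.List;
import javax.microedition.khronos.opengles.GL10;

public interface Shape {
    void draw(GL10 gl); // each shape knows how to draw itself
}

// Inside your GLSurfaceView.Renderer implementation:
private final List<Shape> shapes = new ArrayList<>();

// Callable at any time, not just at construction. Run it via
// GLSurfaceView.queueEvent(...) so the list isn't modified while
// the GL thread is iterating over it.
public void addShape(Shape s) {
    shapes.add(s);
}

@Override
public void onDrawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    for (Shape s : shapes) {
        s.draw(gl);
    }
}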
Personally, I keep the raw model data (vertices, UV coordinates, etc.) in a single class that is sourced into a tree of abstractions. For textured models, I currently have a Renderable Entity (where Renderable is an interface containing a draw(Shader) function, basically one level of abstraction up from Shape). The Entity contains a TexturedModel, which contains a Texture and a Model. The Model contains the raw data, and the Texture contains the relevant texture (because it has to be applied before the model is rendered).
There might be more efficient design options than that, but there are lots of ways to abstract object rendering. As I mentioned, OpenGL doesn't care when you initialize your data, as long as you don't expect it to render data you haven't given it yet. Abstracting your models, shapes, or whatever you want to render into classes gives you a single, manageable, renderable unit. You can add more on the fly as well; every game with changing scenes or characters/entities in the scene proves this.
To connect this back to the link you provided: you already have two classes. If you add a common superclass and a single list, you can render any number of them. By using matrices (C++/general OpenGL, Android), you can draw these in different positions, if that's what you meant by adding more.
Strictly speaking, with the code in the link, you don't even need to worry about reusability. You need matrices to get several copies of a single shape in different positions. You can also use uniform variables if you want different colors, but that's a different can of worms you didn't ask about (consider it an exercise for the reader; uniforms are a critical part of shaders, and you'll have to learn them at some point anyway).
It's somewhat unclear what you mean by "dynamically" in this case. If you just mean you want more objects with manually generated data, added at arbitrary times, a Shape class/interface is the way to go.
If you want dynamic position, you want matrices in the shader.
If you want a pre-made abstraction tree, honestly, there isn't one. You'll have to make one based on what you need for your project. In a case where you only have a few simple geometric shapes, a Shape class makes sense, potentially in combination with the advice from the previous line. When it comes to rendering objects in OpenGL, to put it mildly, there are many roads to Rome, depending on what you need and what you intend to render.
If this answer doesn't directly apply to your situation (either you, OP, or you, random reader who happened to stumble over this), I cannot recommend experimenting highly enough. What works is substantially different when you leave the world of tutorials, books, and courses, and enter the world of your own projects. The key takeaways from this answer (TL;DR):
Abstract to a class or interface that describes the smallest unit of whatever you want to render; whether that's a Renderable, Shape, Model3D or something completely different is purely down to what you're making
Inheritance and instance reuse is your friend in cases like these
Additional models can be created after your renderer has started if you need them; OpenGL doesn't care when the data is sourced as long as you source it before you try to use it.
Dynamic changes to model data (such as position or color) can easily be done with shaders and uniform values and matrices. Don't forget: if you use VBOs, you can reuse the same model, and have different positions or other attributes and alter how your model is rendered through, among other things, uniform variables.
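To illustrate the last point, a hypothetical fragment using android.opengl.GLES20 and android.opengl.Matrix. It assumes a linked 'program' whose vertex shader declares "uniform mat4 uMVPMatrix", a bound VBO/IBO with 'indexCount' indices, and 'instancePositions' as a collection of float[3] placements; none of these names come from the tutorial:

float[] mvp = new float[16];
int uMvp = GLES20.glGetUniformLocation(program, "uMVPMatrix");

for (float[] pos : instancePositions) {                // same model, many placements
    Matrix.setIdentityM(mvp, 0);
    Matrix.translateM(mvp, 0, pos[0], pos[1], pos[2]);
    GLES20.glUniformMatrix4fv(uMvp, 1, false, mvp, 0); // upload per-instance matrix
    GLES20.glDrawElements(GLES20.GL_TRIANGLES, indexCount,
            GLES20.GL_UNSIGNED_SHORT, 0);              // reuse the same bound buffers
}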
Further reading/watching
ThinMatrix' series (LWJGL, but explains a lot of theory)
opengl-tutorial.org (C++, a bit of the API matches, but OpenGL and OpenGL ES are different - largely recommend the theory bits over the code bits)
Android developer documentation (covers implementation on Android, as well as theory)
LWJGL GitBook (LWJGL again, but also contains a decent chunk of theory that generalizes to OpenGL ES)
docs.gl - API documentation for OpenGL ES 2 and 3, as well as OpenGL 2-4
Derive Triangle, Quad, Circle, etc. from a 'Shape' interface that defines the draw() method. http://tutorials.jenkov.com/java/interfaces.html
Then create a List and shove the shapes into and out of it as needed.
http://www.codejava.net/java-core/collections/java-list-collection-tutorial-and-examples
In your onDrawFrame(GL10 gl) method, loop over the shape list.
for (Shape s : shapeList) s.draw(gl);
Also, you should probably add a position field to Shape for the glTranslatef calls.
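A hypothetical sketch of that last point (PositionedShape is my own name, building on the Shape interface above):

import javax.microedition.khronos.opengles.GL10;

public abstract class PositionedShape implements Shape {
    public float x, y, z; // world position of this instance

    @Override
    public void draw(GL10 gl) {
        gl.glPushMatrix();
        gl.glTranslatef(x, y, z); // move to this shape's position
        drawGeometry(gl);         // subclass draws relative to the origin
        gl.glPopMatrix();
    }

    protected abstract void drawGeometry(GL10 gl);
}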
I'm working on a university project in which I need to visualize, on a smartphone, data from pressure sensors built into an insole.
I need to draw on a View, as a background, a footprint, something like the image below but just for one foot.
I don't want to use a static image because with different screen resolutions it could lose too much quality, so I'm trying to do it in code.
The main problem is that I'm not very skilled in graphics programming, so I don't have a smart approach to this problem.
The first and only idea I had was to take the insole's CAD representation, with its real dimensions, scale it to the screen's dimensions, and put it together using the simple shapes (arcs, circles, etc.) available in Android.
In this way I'm creating a Path which composes the whole footprint, which I then draw with a Canvas.
This method would get the job done, but it's awful and needs an exceptional amount of work and time to set up every part.
I have searched for some similar questions but I haven't found anything to solve my problem.
Is there (of course there is) a smarter way to do this stuff, saving time and energies?
Thank you
Of course, you can always use OpenGL ES.
This method might not save you time and energy, but it will give you better graphics and scope to improve it later.
The concept is to create a float buffer with all the points of your footprint and then draw it with connected lines on a SurfaceView.
To get started with OpenGL ES you can use this tutorial: https://developer.android.com/guide/topics/graphics/opengl.html
And here is an example: https://developer.android.com/training/graphics/opengl/index.html
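A minimal sketch of the float-buffer idea, using the OpenGL ES 1.0 API (footOutline is a hypothetical array of x,y pairs you would trace from your CAD drawing):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.opengles.GL10;

public class FootprintOutline {
    private final FloatBuffer vertexBuffer;
    private final int pointCount;

    public FootprintOutline(float[] footOutline) {
        pointCount = footOutline.length / 2;
        ByteBuffer bb = ByteBuffer.allocateDirect(footOutline.length * 4);
        bb.order(ByteOrder.nativeOrder()); // native byte order required for OpenGL
        vertexBuffer = bb.asFloatBuffer();
        vertexBuffer.put(footOutline);
        vertexBuffer.position(0);
    }

    public void draw(GL10 gl) {
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
        gl.glDrawArrays(GL10.GL_LINE_LOOP, 0, pointCount); // closed outline
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    }
}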
I would like to create a Canvas instance that is too big to be backed by a heap-memory Bitmap, let's say 5000x5000 pixels (approx. 95 MB). I would like this very large Canvas to send all the various draw operations directly to a bitmap file. Unfortunately the Bitmap class in Android is marked final, so I can't provide my own implementation. Does anyone have an idea if and how this might be accomplished? I'm not very interested in performance; 10 seconds to make a few dozen draw operations is fine. The goal is to avoid out-of-memory errors.
There's no facility that provides the functionality you are asking for, and even if there were, performing such operations against a file would give horrendous performance.
Probably the only reasonable way is to store just the drawing operations, and create a Canvas the same size as the device screen to serve as a "window" into the whole 5000x5000 pixel canvas. For a detailed explanation see my answer to a related question here: Android - is there a possibility to make infinite canvas?
Here is an idea I had that I think could theoretically work, but would probably require far too much effort:
Create a subclass of Canvas that contains many smaller Canvas objects inside it, representing tiles of the overall Canvas. These tiles should be small enough to fit in memory at least one at a time. Create one file for each tile and use it to store uncompressed pixel data from a Buffer.
When a draw operation occurs on the overall Canvas, figure out which tiles need to be drawn to. One at a time, read the file for that tile into a Bitmap in memory, perform the (possibly clipped) draw, then save the Bitmap data back to the file and close it.
Theoretically it sounds possible, realistically it sounds like too much work.
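For what it's worth, a rough sketch of the tiling part (a helper class rather than a true Canvas subclass, with made-up names; tiles are stored as raw uncompressed ARGB files so only one tile's Bitmap is ever in memory):

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;
import android.graphics.Rect;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

public class TiledSurface {
    private static final int TILE = 1024; // 1024x1024 ARGB = 4 MB in memory
    private final File dir;
    private final int cols, rows;

    public TiledSurface(File dir, int width, int height) {
        this.dir = dir;
        cols = (width + TILE - 1) / TILE;
        rows = (height + TILE - 1) / TILE;
    }

    /** Replays one draw operation onto every tile its bounds touch. */
    public void drawPath(Path path, Paint paint, Rect bounds) throws IOException {
        for (int ty = 0; ty < rows; ty++) {
            for (int tx = 0; tx < cols; tx++) {
                Rect tile = new Rect(tx * TILE, ty * TILE,
                        (tx + 1) * TILE, (ty + 1) * TILE);
                if (!Rect.intersects(tile, bounds)) continue; // skip untouched tiles

                Bitmap bmp = loadTile(tx, ty);
                Canvas c = new Canvas(bmp);
                c.translate(-tile.left, -tile.top); // map global -> tile coordinates
                c.drawPath(path, paint);            // Canvas clips to the tile for us
                saveTile(tx, ty, bmp);
                bmp.recycle();                      // keep only one tile in memory
            }
        }
    }

    private Bitmap loadTile(int tx, int ty) throws IOException {
        Bitmap bmp = Bitmap.createBitmap(TILE, TILE, Bitmap.Config.ARGB_8888);
        File f = new File(dir, tx + "_" + ty + ".raw");
        if (f.exists()) {
            ByteBuffer buf = ByteBuffer.allocate(TILE * TILE * 4);
            FileInputStream in = new FileInputStream(f);
            in.getChannel().read(buf);
            in.close();
            buf.rewind();
            bmp.copyPixelsFromBuffer(buf);
        }
        return bmp;
    }

    private void saveTile(int tx, int ty, Bitmap bmp) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(TILE * TILE * 4);
        bmp.copyPixelsToBuffer(buf);
        buf.rewind();
        FileOutputStream out = new FileOutputStream(new File(dir, tx + "_" + ty + ".raw"));
        out.getChannel().write(buf);
        out.close();
    }
}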
I am trying to make a game for Android. I currently have all of my art assets in the drawables folder, but my question is: how do I actually reference a specific one to render it?
I know that the resources each have a unique ID, and I probably have to do the drawing in an @Override of the onDraw(Canvas) method. So the question is: how do I actually display the image on the screen?
I have looked through a good half dozen books, and all they talk about is getting an image off the web, or manually drawing it with the Paint functionality. But I have the assets ready to go as .bmp files, and they are complex enough that coding Paint to draw them by hand would be a major migraine.
I would prefer a direction to look in (a specific book, tutorial, blog, or open source code, assuming quality comments). I am still learning Java as my second programming language, so I am not yet at the point of translating pseudocode directly into Java.
[added in edit]
Does drawing using Paint every frame have a lower overhead than rendering a bitmap and only changing the display when it changes?
For 2D games I would recommend using a SurfaceView.
Load your image as a bitmap and let the canvas draw that bitmap.
To animate the images there are two possible ways:
Invalidate the view so it redraws
Create a looping thread to call the draw function
Here's a good starting tutorial about displaying images in Android, especially for games.
You can use one of the drawBitmap variants of the Canvas class. A Canvas object is passed into the onDraw method. You can use the BitmapFactory class to load a resource.
You can read a nice tutorial here.
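A minimal sketch of that, assuming a drawable named R.drawable.player (substitute one of your own assets):

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.view.View;

public class GameView extends View {
    private final Bitmap player;

    public GameView(Context context) {
        super(context);
        // Decodes res/drawable/player.bmp (or .png) into a Bitmap, once.
        player = BitmapFactory.decodeResource(getResources(), R.drawable.player);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // Blit the bitmap at (x, y) in pixels; the Paint argument can be null.
        canvas.drawBitmap(player, 100f, 200f, null);
    }
}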
I am writing a mobile app (Android) that will allow the user to 'write' on a canvas using a single-touch device with 1 pixel accuracy. The app will be running on a tablet device of approximately standard 8 1/2" x 11" size. My strategy is to store the 'text' as vector data: each stroke of the input device will essentially be a vector consisting of a start point, an end point, and some number of intermediate points that help to define the shape of the vector (generated by the touchscreen/OS on touch movement). This should allow me to keep track of the order the strokes were put down in (to support undo, etc.) and be flexible enough to allow this text to be resized, etc., like any other vector graphic.
However, doing some very rough back-of-the-envelope calculations: with a highly accurate input device and a screen large enough to emulate a standard paper notepad one-for-one, that means ~1,700 strokes per full page of text. Figuring, worst case, that each stroke could be composed of up to ~20-30 individual points (a point for every pixel or so of the stroke), that means ~50,000 data points per page. That is WAY too big for SQLite/Android to handle with any expectation of reliability when a page is being reloaded and the vector strokes are being recreated (I have to imagine that pulling 50,000+ results from the SQLite db will exceed the CursorWindow limit of 1 MB).
I know I could break the data retrieval up into multiple queries, or modify the stroke data so that an intermediate point is only added to help define the stroke vector's shape if it is more than X pixels from the start, finish, or another intermediate point (sketched below). But I am wondering if I need to rethink this strategy from the ground up...
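For concreteness, a minimal sketch of the thinning I have in mind (endpoints are always kept; an intermediate point survives only if it is at least minDist pixels from the last kept point):

import android.graphics.PointF;
import java.util.ArrayList;
import java.util.List;

static List<PointF> thin(List<PointF> stroke, float minDist) {
    List<PointF> kept = new ArrayList<>();
    if (stroke.isEmpty()) return kept;
    PointF last = stroke.get(0);
    kept.add(last); // always keep the start point
    for (int i = 1; i < stroke.size() - 1; i++) {
        PointF p = stroke.get(i);
        float dx = p.x - last.x, dy = p.y - last.y;
        if (dx * dx + dy * dy >= minDist * minDist) {
            kept.add(p); // far enough from the last kept point
            last = p;
        }
    }
    if (stroke.size() > 1) kept.add(stroke.get(stroke.size() - 1)); // always keep the end
    return kept;
}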
Any suggestions on how to tackle this problem in a more efficient way?
Thanks!
Paul
Is there any reason for using vector data in the first place? Without knowing your other requirements, it seems to me that you could just store the data as a raster/bitmap and compress it with regular compression methods such as PNG/zip/DjVu (or, if performance suffers, something simple like run-length encoding / RLE).
EDIT: sorry, I didn't read the question carefully. However, if you only need things like "undo" and "resize", you can take a snapshot of the bitmap for every stroke (of course you only need to snapshot the regions that change).
Also it might be possible to take a hybrid approach where you display a snapshot bitmap first while waiting for the (real) vector images to load.
Furthermore, I am not familiar with the Android cursor limit, but SQL queries can always be rewritten to split the result into pieces (via LIMIT ... OFFSET).
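For example, a sketch of paging the stroke points so that no single Cursor ever has to hold a full page of data (table and column names are made up, and addPoint() is a hypothetical consumer):

import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

void loadAllPoints(SQLiteDatabase db, long pageId) {
    final int PAGE = 5000; // rows per query, tune as needed
    for (int offset = 0; ; offset += PAGE) {
        Cursor c = db.rawQuery(
                "SELECT stroke_id, x, y FROM points WHERE page_id = ? " +
                "ORDER BY stroke_id, seq LIMIT ? OFFSET ?",
                new String[] { String.valueOf(pageId),
                               String.valueOf(PAGE),
                               String.valueOf(offset) });
        try {
            if (c.getCount() == 0) break; // no more rows
            while (c.moveToNext()) {
                addPoint(c.getLong(0), c.getFloat(1), c.getFloat(2));
            }
        } finally {
            c.close();
        }
    }
}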
The solution I am using for now, although I would be open to any further suggestions:
Create a canvas View that can both convert SVG paths to Android Paths, and intercept motion events, converting them to Android Paths while also storing them as SVG paths (sketched below).
Display the Android Paths on the screen in onDraw().
Write the SVG paths out to an .svg file.
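A stripped-down sketch of the capture side of that (the SVG path data accumulates in a StringBuilder; wrapping it in an <svg> document and writing it to a file is omitted):

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;
import android.view.MotionEvent;
import android.view.View;
import java.util.ArrayList;
import java.util.List;

public class InkView extends View {
    private final List<Path> strokes = new ArrayList<>();
    private final StringBuilder svg = new StringBuilder(); // SVG path data, e.g. "M10,20 L11,22 ..."
    private final Paint pen = new Paint(Paint.ANTI_ALIAS_FLAG);
    private Path current;

    public InkView(Context context) {
        super(context);
        pen.setStyle(Paint.Style.STROKE);
        pen.setStrokeWidth(3f);
    }

    @Override
    public boolean onTouchEvent(MotionEvent e) {
        switch (e.getAction()) {
            case MotionEvent.ACTION_DOWN:
                current = new Path();
                current.moveTo(e.getX(), e.getY());
                strokes.add(current);
                svg.append("M").append(e.getX()).append(",").append(e.getY());
                break;
            case MotionEvent.ACTION_MOVE:
                if (current != null) {
                    current.lineTo(e.getX(), e.getY());
                    svg.append(" L").append(e.getX()).append(",").append(e.getY());
                }
                break;
        }
        invalidate(); // redraw with the updated stroke
        return true;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        for (Path p : strokes) canvas.drawPath(p, pen);
    }
}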
Paul