I'm planning to create a tablet app and would like to ask for some guidance.
I have pictures in SVG format like this one.
With SVG it is easy: you just change the fill parameter to a different color. But as I understand it, there is no easy/stable SVG processing library to use with libGDX. I still want to use SVG files to create/store the images for my app.
What processing path would you recommend?
Is there an easy way to convert SVG paths/shapes into com.badlogic.gdx.math.Bezier or Polygon objects and then draw them on screen and get user input (taps) inside these shapes?
Or should I use different objects/paths?
Shapes could be grouped together; for example, I want the two windows of a house to change color at once.
LibGDX gives you a lower-level way to do this type of rendering, but it doesn't offer an out-of-the-box way to render SVG. It really depends on whether you are looking for performance or simply want to draw basic shapes.
To simply render shapes you could use something like ShapeRenderer, which gives you an interface very close to the Java2D way of drawing things. For quickly drawing some basic stuff this might be handy.
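A minimal sketch of that route (the window rectangle and color names are just placeholders), using a Rectangle both for drawing and for tap detection:

    // created once, e.g. in create():
    ShapeRenderer shapes = new ShapeRenderer();
    Rectangle window = new Rectangle(100, 100, 60, 80);   // one window of the house
    Color windowColor = Color.SKY;

    // in render():
    shapes.setProjectionMatrix(camera.combined);
    shapes.begin(ShapeRenderer.ShapeType.Filled);
    shapes.setColor(windowColor);                          // change this to recolor the group
    shapes.rect(window.x, window.y, window.width, window.height);
    shapes.end();

    // in touchDown(), after unprojecting screen coordinates to world coordinates:
    if (window.contains(worldX, worldY)) {
        windowColor = Color.ORANGE;                        // recolor on tap
    }

For arbitrary SVG shapes you would flatten the path into a Polygon and use Polygon.contains(x, y) for hit testing; note that ShapeRenderer.polygon() only draws outlines, so filled polygons have to be triangulated (e.g. with EarClippingTriangulator) and drawn with triangle() calls.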
If you want a more robust kind of rendering, you will probably want to look into using a Mesh and working with shaders for OpenGL ES. You can find examples of these in the libGDX tests, as well as by searching online for examples/tutorials.
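As a rough sketch of what that involves (a single colored triangle; the attribute/uniform names are my own, not anything libGDX prescribes):

    // one triangle, position-only vertices
    Mesh mesh = new Mesh(true, 3, 3,
            new VertexAttribute(VertexAttributes.Usage.Position, 2, "a_position"));
    mesh.setVertices(new float[]{ -0.5f, -0.5f,   0.5f, -0.5f,   0f, 0.5f });
    mesh.setIndices(new short[]{ 0, 1, 2 });

    String vert =
            "attribute vec2 a_position;\n" +
            "uniform mat4 u_projTrans;\n" +
            "void main() { gl_Position = u_projTrans * vec4(a_position, 0.0, 1.0); }";
    String frag =
            "#ifdef GL_ES\nprecision mediump float;\n#endif\n" +
            "uniform vec4 u_color;\n" +
            "void main() { gl_FragColor = u_color; }";
    ShaderProgram shader = new ShaderProgram(vert, frag);

    // in render():
    shader.bind();                                    // shader.begin()/end() on older libGDX versions
    shader.setUniformMatrix("u_projTrans", camera.combined);
    shader.setUniformf("u_color", 1f, 0.8f, 0f, 1f);  // the fill color, analogous to SVG's fill
    mesh.render(shader, GL20.GL_TRIANGLES);

You would generate the vertex data by tessellating your SVG shapes, but the rendering side stays this small.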
If you want to turn the SVG into a texture, you will want to look at Pixmap; you can then create a Texture from it and render it with a SpriteBatch. You will need to write the pixels you want to color into the Pixmap yourself. However, doing it this way generates an unmanaged texture (i.e. you will have to rebuild it when the user comes back to the app after pressing back or putting the device to sleep).
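A minimal sketch of that path (the rectangle stands in for whatever you rasterize out of the SVG):

    Pixmap pixmap = new Pixmap(256, 256, Pixmap.Format.RGBA8888);
    pixmap.setColor(Color.CLEAR);
    pixmap.fill();
    pixmap.setColor(Color.RED);
    pixmap.fillRectangle(32, 32, 192, 192);   // stand-in for the rasterized shape
    Texture texture = new Texture(pixmap);    // unmanaged: rebuild it on resume
    pixmap.dispose();

    // in render():
    batch.begin();
    batch.draw(texture, 100, 100);
    batch.end();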
I want to develop a UI like this one.
In this UI you can see a yellow shaded style in the upper-right corner. How can I develop this type of asset in Android?
You can do both, or even import an SVG, which is usually my preferred way. In the link I passed you will see examples of all the different methods, so I will just give you a quick summary here.
If you decide on image-based formats like JPEG, PNG, etc., you will have to import each asset multiple times for different screen sizes in order not to lose sharpness, which takes more storage space.
If you decide on vector-based graphics like Android XML drawables or SVG, you end up with more flexibility and programmability of your graphic and need less storage space. However, it can decrease the performance of the app if the graphic is complicated (not in your case, yours is super simple), because it uses computing power to render the vectors instead of just showing a file from storage.
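For instance, a single vector drawable covers every density, and recoloring it at run-time is trivial; a quick sketch (the resource name ic_corner_badge, the context and the imageView are placeholders of mine):

    // Inflate the vector resource; AppCompat handles the pre-Lollipop fallback.
    Drawable badge = AppCompatResources.getDrawable(context, R.drawable.ic_corner_badge);
    imageView.setImageDrawable(badge);

    // Recolor the vector at run-time.
    DrawableCompat.setTint(badge.mutate(), Color.YELLOW);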
I am new to Android and ARToolkit. I have to develop an Android application that can augment and render 3D models from CT scan images in DICOM format on a detected marker. I am using the ARToolkit SDK for this purpose, but I don't know how to proceed with the DICOM files and render the 3D model on the marker. Could someone please suggest an approach? Any sort of help will be highly appreciated.
Thanks
I recommend the following process:
Figure out a tool for segmentation. This is the process whereby you build a 3D model of a subset of the data depending on density; for example, a model of the ribs from a chest CT. You should do this outside of Android and then figure out how to move it over later. You can use tools like ITK and VTK to learn how to do this stage.
If you want to avoid the ITK/VTK learning curve, use GDCM (Grassroots DICOM) to learn how to load a DICOM series. With this approach you can have a 3D array of density points in your app in a few hours. At that point you can forget about the DICOM and just work on the numbers. You still have the segmentation problem, though.
You can look at the NIH app ImageVis3D, which has source code available, and see what their approach is.
Once you have a segmented dataset, conversion to a standard format is not too hard and you will be on your way.
What is the 'detected marker' you refer to? If you have a marker in the image set to aid in segmentation, you can work on detecting it from the 3D dataset you get back from loading the DICOM data.
Once you have the processes worked out, you can then see how to apply it all to Android.
It seems a little old, but it is recommended for a start: Android OpenGL .OBJ file loader
I was also wondering about building a custom View to address your needs, since in a custom View you can display anything.
I am developing a peer-to-peer collaborative drawing app on Android using the AllJoyn framework, like a chalkboard.
I am able to implement collaborative chat among peers. Now I want to implement canvas sharing, where everyone can draw on a single canvas in real time.
How should I start with the canvas? What would its data structure be? Is there any specific image object I need to handle? Do I need to use JSON? Do I have to store the pixel values in a 2D array?
I need only a black & white screen: a white background with black as the drawing color.
I just want to know the approach behind it. Any reference will be helpful.
thanks...
Canvas is really a bitmap.
You add/change pixels on the bitmap using drawing commands.
To do collaborative drawing, you wouldn't share the pixel values between all users with each change.
That would create bottlenecks in serializing, transport and deserializing. It would be too slow to work.
Instead, share the latest drawing commands between all users with each change.
If user#1 draws a line from [20,20] to [100,100], just serialize the command that drew the line and share it with all users.
Perhaps the serialization might look like this: "L 20,20 100,100".
If you want an efficient serialization structure, take a look at the way SVG does its path data; it is very efficient for transporting to many users.
All other users would listen for incoming commands. Upon receipt, they would deserialize user#1's line and have it automatically drawn on their own canvas.
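As a rough illustration of that idea (the command format here is made up, loosely modelled on SVG path syntax):

    // Sender: serialize the stroke the moment the user finishes it.
    String serialize(float x1, float y1, float x2, float y2) {
        return "L " + x1 + "," + y1 + " " + x2 + "," + y2;
    }

    // Receiver: parse the command and replay it on the local canvas.
    void apply(String cmd, Canvas canvas, Paint blackPaint) {
        if (cmd.startsWith("L ")) {
            String[] p = cmd.substring(2).split("[ ,]+");
            canvas.drawLine(Float.parseFloat(p[0]), Float.parseFloat(p[1]),
                            Float.parseFloat(p[2]), Float.parseFloat(p[3]), blackPaint);
        }
    }

Each peer keeps its own backing Bitmap/Canvas and simply applies every command it receives, so the only thing on the wire is a short string per stroke.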
Application size on a phone needs to be as small as possible. If I have an image of a sword, and then a very similar image of that same sword except that I've changed the color, added flames, or changed the picture of the jewel, how do I store these as efficiently as possible?
One possibility is to store the differences graphically: I'd store just the image differences and then combine the two images at runtime. I've already asked a question on the Graphic Design Stack Exchange site about how to do that.
Another possibility would be that the APK format already does this, or that there is already a file format or method people use to store similar images on Android.
Any suggestions? Are there tools I could use to take two PNGs and generate a difference file, or a file format for storing similar images, or something like that?
I'd solve this problem at a higher level. For example, do the color change at run-time: store the image with a very specific color (some ugly shade of green that you know is the placeholder) and replace it at run-time with white, red, blue, or whatever actual color you want. You could then generate several image buffers at load-time.
For compositing the two images, just store the 'jewel' image separately and draw it over the basic sword. Again, you could create a new image at load-time, or just do the overdraw at run-time.
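A rough sketch of both ideas together, assuming hypothetical resources sword_template and jewel and a known placeholder color:

    static final int PLACEHOLDER = 0xFF00FF00;   // the "ugly green" baked into the asset

    Bitmap buildSword(Resources res, int bladeColor) {
        Bitmap sword = BitmapFactory.decodeResource(res, R.drawable.sword_template)
                                    .copy(Bitmap.Config.ARGB_8888, true);   // mutable copy
        int w = sword.getWidth(), h = sword.getHeight();
        int[] px = new int[w * h];
        sword.getPixels(px, 0, w, 0, 0, w, h);
        for (int i = 0; i < px.length; i++) {
            if (px[i] == PLACEHOLDER) px[i] = bladeColor;                   // recolor at load-time
        }
        sword.setPixels(px, 0, w, 0, 0, w, h);

        // Composite the jewel over the recolored blade.
        Bitmap jewel = BitmapFactory.decodeResource(res, R.drawable.jewel);
        new Canvas(sword).drawBitmap(jewel, 40, 12, null);                  // position is arbitrary
        return sword;
    }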
This approach will help reduce your application's footprint in flash, but it will not reduce the memory footprint when the app is active.
I believe your idea of storing the delta between the two images is quite good.
You would then compress the resulting delta file with a simple entropy coder, such as Huffman, and you are quite likely to achieve a strong compression ratio if the similarities with the base image are significant.
If the similarity is really very strong, you could even try a range coder to achieve less-than-one-bit-per-pixel performance. The difference, however, might be noticeable only for larger images (i.e. higher resolution than a 12x12 sprite).
These ideas, however, will require you (or someone) to write the code for such functions. It should be quite straightforward.
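For what it's worth, a quick way to prototype this on Android without writing your own entropy coder is to XOR the pixel arrays and feed the result to java.util.zip.Deflater, which already combines LZ77 with Huffman coding; a sketch:

    // Both bitmaps must be the same size; the delta is mostly zeros when the images are similar.
    byte[] compressDelta(Bitmap base, Bitmap variant) throws IOException {
        int w = base.getWidth(), h = base.getHeight();
        int[] a = new int[w * h], b = new int[w * h];
        base.getPixels(a, 0, w, 0, 0, w, h);
        variant.getPixels(b, 0, w, 0, 0, w, h);

        ByteBuffer delta = ByteBuffer.allocate(a.length * 4);
        for (int i = 0; i < a.length; i++) {
            delta.putInt(a[i] ^ b[i]);          // zero wherever the two images agree
        }

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DeflaterOutputStream deflater =
                new DeflaterOutputStream(out, new Deflater(Deflater.BEST_COMPRESSION));
        deflater.write(delta.array());
        deflater.close();
        return out.toByteArray();               // store this instead of the second PNG
    }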
A very easy approach is to use an image pack (one image containing many), so you can simply leverage the PNG or JPG compression algorithms for your purpose. You then split the images before drawing.
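In plain Android that split is just a sub-rectangle copy; a tiny sketch assuming a hypothetical sprite_sheet resource laid out on a 64x64 grid:

    Bitmap sheet = BitmapFactory.decodeResource(res, R.drawable.sprite_sheet);
    // Cut out the sprite in column 2, row 0 of the 64x64 grid.
    Bitmap sword = Bitmap.createBitmap(sheet, 2 * 64, 0, 64, 64);

(In libGDX the same idea is what TextureAtlas/TextureRegion give you for free.)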
This is a very theoretical question about general knowledge.
First of all, I don't have a lot of understanding of OpenGL yet, so please forgive me:
Is the best way to load a 3D model into Android to use Wavefront .obj files?
I downloaded a sample model for SketchUp (some robot model with a lot of parts) and the .obj file is 3 MB. I loaded it into a vector of strings (almost 100k lines) and the application uses about 15 MB more RAM! So I am a bit concerned about this whole method and approach.
When I load the model, is there a simple way of rotating and moving it? Will it behave like a single object in OpenGL, or do I need to multiply all those thousands of vertices by a matrix?
Is there anything else I should add to my understanding?
I can't answer all of your questions, but:
3) Yes. You can combine the Android framework's onTouchEvent() functionality with OpenGL. In OpenGL you can rotate things very easily with simple glRotatef(angle, x, y, z) calls (which rotate everything drawn afterwards for you), where the provided angle varies based on your touch interaction.
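A rough sketch of that pattern with a GLSurfaceView and the ES 1.x API (the angle/lastX fields are mine):

    private volatile float angle;                 // updated from the UI thread
    private float lastX;

    @Override
    public boolean onTouchEvent(MotionEvent e) {
        if (e.getAction() == MotionEvent.ACTION_MOVE) {
            angle += (e.getX() - lastX) * 0.5f;   // horizontal drag spins the model
        }
        lastX = e.getX();
        return true;
    }

    // in the Renderer's onDrawFrame(GL10 gl):
    gl.glLoadIdentity();
    gl.glTranslatef(0f, 0f, -5f);
    gl.glRotatef(angle, 0f, 1f, 0f);              // one call rotates every vertex of the model
    // ...then issue the model's glDrawArrays/glDrawElements as usual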
EDIT:
2) Why are you loading it into Strings? I don't know model formats very well, but I parse many files, and you should load the data into the smallest variable type you can, for instance an array of floats or shorts. I don't know your data, but that is the best way. Consider parsing in portions if you have memory issues.
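For example, instead of keeping the raw lines around, you can parse the vertex data straight into a FloatBuffer while reading the file; a sketch for the "v x y z" lines of a Wavefront .obj (faces would be handled the same way):

    FloatBuffer loadVertices(InputStream in) throws IOException {
        List<Float> verts = new ArrayList<>();
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        String line;
        while ((line = reader.readLine()) != null) {
            if (line.startsWith("v ")) {                     // vertex line: "v x y z"
                String[] t = line.trim().split("\\s+");
                verts.add(Float.parseFloat(t[1]));
                verts.add(Float.parseFloat(t[2]));
                verts.add(Float.parseFloat(t[3]));
            }
        }
        reader.close();

        // 4 bytes per float, native byte order, ready to hand to OpenGL.
        FloatBuffer buf = ByteBuffer.allocateDirect(verts.size() * 4)
                                    .order(ByteOrder.nativeOrder())
                                    .asFloatBuffer();
        for (float v : verts) buf.put(v);
        buf.position(0);
        return buf;
    }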